OpenCV perspectiveTransform broken function - homography

I'm trying to use perspectiveTransform but I keep getting an error. I tried to follow the solution from this thread: http://answers.opencv.org/question/18252/opencv-assertion-failed-for-perspective-transform/
_players[i].get_Coordinates() is of type Point
_homography_matrix is a 3 x 3 Mat
Mat temp_Mat = Mat::zeros(2, 1, CV_32FC2);
for (int i = 0; i < _players.size(); i++)
{
cout << Mat(_players[i].get_Coordinates()) << endl;
perspectiveTransform(Mat(_players[i].get_Coordinates()), temp_Mat, _homography_matrix);
}
Also, how do I convert temp_Mat into type Point?
OpenCV Error: Assertion failed (scn + 1 == m.cols) in cv::perspectiveTransform

Basically you just need to correct from
Mat(_players[i].get_Coordinates()) ...
to
Mat2f(_players[i].get_Coordinates()) ...
In the first case you are creating a 2x1, 1 channel float matrix, in the second case (correct) you create a 1x1, 2 channel float matrix.
You also don't need to initialize temp_Mat.
You can also use the Mat_ template to better control the types of your Mats. E.g., creating a Mat of type CV_32FC2 is equivalent to creating a Mat2f.
This sample code will also show you how to convert back and forth between Mat and Point:
#include <opencv2/opencv.hpp>
#include <vector>
using namespace std;
using namespace cv;
int main()
{
// Some random points
vector<Point2f> pts = {Point2f(1,2), Point2f(5,10)};
// Some random transform matrix
Mat1f m(3,3, float(0.1));
for (int i = 0; i < pts.size(); ++i)
{
cout << "Point: " << pts[i] << endl;
Mat2f dst;
perspectiveTransform(Mat2f(pts[i]), dst, m);
cout << "Dst mat: " << dst << endl;
Point2f p(dst(0));
cout << "Dst point: " << p << endl;
}
return 0;
}
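Applied to the loop from the question, the same fix plus the conversion back to a point could look like the sketch below (it assumes get_Coordinates() returns a cv::Point and that _homography_matrix holds floats or doubles):
for (size_t i = 0; i < _players.size(); i++)
{
    // Point (int) -> Point2f, so Mat2f(src) becomes a 1x1 CV_32FC2 matrix
    Point2f src = _players[i].get_Coordinates();

    Mat2f temp_Mat; // no need to pre-allocate or zero it
    perspectiveTransform(Mat2f(src), temp_Mat, _homography_matrix);

    Point2f result(temp_Mat(0)); // convert the 1x1, 2-channel result back to a point
    cout << result << endl;
}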

Related

Why is my Grid Traveler Memoization still sticking?

I'm currently working on implementing memoization into the Grid Traveler problem. It looks like it should work, but it's still sticking on bigger cases like (18,18). Did I miss something, or are maps not the right choice for this kind of problem?
P.S. I'm still very new at working with maps.
#include <iostream>
#include <unordered_map>
#include <string>
using namespace std;
uint64_t gridTravMemo(int m, int n, unordered_map<string, uint64_t>grid)
{
string key;
key = to_string(m) + "," + to_string(n);
if (grid.count(key) > 0)
return grid.at(key);
if (m == 1 && n == 1)
return 1;
if (m == 0 || n == 0)
return 0;
grid[key] = gridTravMemo(m-1, n, grid) + gridTravMemo(m, n-1, grid);
return grid.at(key);
}
int main()
{
unordered_map<string, uint64_t> gridMap;
cout << gridTravMemo(1, 1, gridMap) << endl;
cout << gridTravMemo(2, 2, gridMap) << endl;
cout << gridTravMemo(3, 2, gridMap) << endl;
cout << gridTravMemo(3, 3, gridMap) << endl;
cout << gridTravMemo(18, 18, gridMap) << endl;
return 0;
}
The point of memoized search is to optimize running time by reusing values that you have already calculated. This way, instead of a brute-force algorithm, you can reach a runtime of O(N*M).
However, you are passing your unordered_map<string, uint64_t> grid by value as a parameter to your depth-first search.
You are calling grid[key] = gridTravMemo(m-1, n, grid) + gridTravMemo(m, n-1, grid); which means your search splits into two branches. However, the grids in these two branches are separate copies, so the same state can be visited in both branches, leading to a runtime more like O(2^(N*M)).
When you're testing an 18x18 grid, this definitely will not run quickly enough.
This is relatively easy to fix. Just declare grid as a global variable, so its values can be shared between the different branches.
Try something like this:
#include <iostream>
#include <unordered_map>
#include <string>
using namespace std;
unordered_map<string, uint64_t> grid;
uint64_t gridTravMemo(int m, int n)
{
string key;
key = to_string(m) + "," + to_string(n);
if (grid.count(key) > 0)
return grid.at(key);
if (m == 1 && n == 1)
return 1;
if (m == 0 || n == 0)
return 0;
grid[key] = gridTravMemo(m-1, n) + gridTravMemo(m, n-1);
return grid.at(key);
}
int main()
{
cout << gridTravMemo(1, 1) << endl;
grid.clear();
cout << gridTravMemo(2, 2) << endl;
grid.clear();
cout << gridTravMemo(3, 2) << endl;
grid.clear();
cout << gridTravMemo(3, 3) << endl;
grid.clear();
cout << gridTravMemo(18, 18) << endl;
return 0;
}
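If you would rather not use a global, an equivalent fix (a sketch, not from the answer above) is to pass the cache by reference so every recursive call shares the same map:
#include <string>
#include <unordered_map>
using namespace std;

// Same memoization as above, but the cache is shared by passing it by
// reference instead of making it a global variable.
uint64_t gridTravMemo(int m, int n, unordered_map<string, uint64_t>& grid)
{
    string key = to_string(m) + "," + to_string(n);
    if (grid.count(key) > 0)
        return grid.at(key);
    if (m == 1 && n == 1)
        return 1;
    if (m == 0 || n == 0)
        return 0;
    grid[key] = gridTravMemo(m - 1, n, grid) + gridTravMemo(m, n - 1, grid);
    return grid.at(key);
}
main can then keep declaring gridMap and passing it to each call exactly as in the original question.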

How to use OpenMP to deal with two for loops with

I am new to OpenMP... Please help me with this dumb question. Thank you :)
Basically, I want to use OpenMP to speed up two for loops. But I do not know why it keeps saying: invalid controlling predicate for the for loop.
By the way, my GCC version is gcc (Ubuntu 6.2.0-5ubuntu12) 6.2.0 20161005, and OS I am using is Ubuntu 16.10.
Basically, I generate toy data with a typical key-value layout, like this:
Data = {
"0": ["100","99","98","97",..."1"];
"1": ["100","99","98","97",..."1"];
...
"999":["100","99","98","97",..."1"];
}
Then, for each key, I want to compare its value list with those of the other keys. Here, I just sum up the sizes via "user1_list.size()+user2_list.size();". Since the sum for each key is totally independent of the other keys, this should be a good fit for parallelism.
Here is my toy example code.
#include <map>
#include <vector>
#include <string>
#include <iostream>
#include "omp.h"
using namespace std;
int main(){
// Create Data
map<string, vector<string>> data;
for(int i=0; i != 1000; i++){
vector<string> list;
for (int j=100; j!=0; j--){
list.push_back(to_string(j));
}
data[to_string(i)]=list;
}
cout << "Data Total size: " << data.size() << endl;
int count = 1;
#pragma omp parallel for private(count)
for (auto it=data.begin(); it!=data.end(); it++){
//cout << "Evoke Thread: " << omp_get_thread_num();
cout << " count: " << count << " / " << data.size() << endl;
count ++;
string user1 = it->first;
vector<string> user1_list = it->second;
for (auto it2=data.begin(); it2!=data.end(); it2++){
string user2 = it2->first;
vector<string> user2_list = it2->second;
cout << "u1:" << user1 << " u2:" << user2;
int total_size = user1_list.size()+user2_list.size();
cout << " total size: " << total_size << endl;
}
}
return 0;
}
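A minimal sketch of one common workaround (not part of the original post): GCC reports "invalid controlling predicate" because OpenMP's canonical loop form requires an integer (or random-access iterator) loop variable and a relational test such as <, while this loop uses != on std::map's bidirectional iterators. Collecting the map iterators into a vector first and parallelizing over a plain integer index satisfies that requirement:
#include <map>
#include <vector>
#include <string>
#include <iostream>
#include "omp.h"
using namespace std;

int main() {
    // Same toy data as in the question.
    map<string, vector<string>> data;
    for (int i = 0; i != 1000; i++) {
        vector<string> list;
        for (int j = 100; j != 0; j--)
            list.push_back(to_string(j));
        data[to_string(i)] = list;
    }

    // OpenMP cannot parallelize the map iteration directly, so gather the
    // iterators into a random-access container first.
    vector<map<string, vector<string>>::iterator> its;
    for (auto it = data.begin(); it != data.end(); ++it)
        its.push_back(it);

    #pragma omp parallel for
    for (int i = 0; i < (int)its.size(); ++i) {
        const vector<string>& user1_list = its[i]->second;
        // Reading the shared map from several threads is fine; nothing writes to it.
        for (auto it2 = data.begin(); it2 != data.end(); ++it2) {
            int total_size = (int)(user1_list.size() + it2->second.size());
            (void)total_size; // stands in for the real per-pair work
        }
    }
    return 0;
}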

Controlling cout in a for loop

Here is my code:
#include <iostream>
#include <string>
#include <sstream>
using namespace std;
int main()
{
stringstream os; // Initialize stringstream "os"
string mValue = "month"; // Initialize mValue "month"
int iValue = 1; // Initialize iValue "1"
for(int iValue = 1; iValue < 13; ++iValue) // Iteration 1: 1 < 13 so execute the following:
{
os << mValue << "" << iValue; // Glue mValue and iValue together
cout << os.str() << endl; // Print glued mValue and iValue
}
return 0;
}
This results in the following output:
month1
month2month2
month3month3month3
month4month4month4month4
month5month5month5month5month5
month6month6month6month6month6month6
month7month7month7month7month7month7month7
month8month8month8month8month8month8month8month8
month9month9month9month9month9month9month9month9month9
month10month10month10month10month10month10month10month10month10
month11month11month11month11month11month11month11month11month11month11
month12month12month12month12month12month12month12month12month12month12month12
The desired output is:
month1
month2
month3
month4
month5
month6
month7
month8
month9
month10
month11
month12
Being a noob at coding, I understand why this is happening but I don't know how to fix it. I tried to place cout outside of the for loop but that results in
month1month2month3month4month5month6month7month8month9month10month11month12
I'm out of ideas and I hope you can tell me how to get this right.
You need to append a newline escape sequence to os inside the loop, after placing the cout outside the loop:
os << mValue << "" << iValue << "\n";
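Putting both changes together, a sketch of the adjusted program:
#include <iostream>
#include <sstream>
#include <string>
using namespace std;

int main()
{
    stringstream os;
    string mValue = "month";
    for (int iValue = 1; iValue < 13; ++iValue)
    {
        os << mValue << iValue << "\n"; // one "monthN" per line
    }
    cout << os.str(); // print the accumulated buffer once, after the loop
    return 0;
}
Alternatively, if you want to keep the cout inside the loop, declare the stringstream inside the loop (or reset it with os.str("") each iteration) so it does not keep accumulating the earlier months.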

performing N independent 1D FFT on a 2D matrix with FFTW

I have a 2-dimensional matrix in which each column corresponds to one independent signal, and I want to perform an independent 1D FFT on each column. In MATLAB, applying fft to the 2D matrix does the trick. But I am porting my code to C++ with FFTW, and I wonder if there is a way to do the same. I tried the following code, setting the column size to 1 and the row size to 4 (the total number of rows), but it does not help.
#include <iostream>
#include <complex>
#include "fftw3.h"
using namespace std;
int main(int argc, char** argv)
{
complex<double> data[4][2];
data[0][0] = complex<double>(1,1);
data[1][0] = complex<double>(2,1);
data[2][0] = complex<double>(3,1);
data[3][0] = complex<double>(4,1);
data[0][1] = complex<double>(1,1);
data[1][1] = complex<double>(1,2);
data[2][1] = complex<double>(1,3);
data[3][1] = complex<double>(1,4);
cout << "original data ..." << endl;
cout << data[0][0] << '\t' << data[0][1] << endl;
cout << data[1][0] << '\t' << data[1][1] << endl;
cout << data[2][0] << '\t' << data[2][1] << endl;
cout << data[3][0] << '\t' << data[3][1] << endl;
cout << endl << endl;
fftw_plan plan=fftw_plan_dft_2d(4, 1,(fftw_complex*)&data[0][0], (fftw_complex*)&data[0][0], FFTW_FORWARD, FFTW_ESTIMATE);
fftw_execute(plan);
cout << "after fftw ..." << endl;
cout << data[0][0] << '\t' << data[0][1] << endl;
cout << data[1][0] << '\t' << data[1][1] << endl;
cout << data[2][0] << '\t' << data[2][1] << endl;
cout << data[3][0] << '\t' << data[3][1] << endl;
return 0;
}
The above code takes the first and second rows, reshapes them into a 2x2 matrix, and then performs a 2D FFT.
So far, the only way that comes to mind is the following: for an NxM matrix (N rows, M columns), I create M plans for M 1D FFTs and execute them serially to get the result. But in the real application the matrix is very big and M is large, so this approach is very inefficient. Any better idea? Thanks.
For those stumbling across this nowadays, the FFTW devs have implemented routines for this operation, which is faster than looping through each column and taking a separate transform. You certainly don't want to take a 2D transform (as is shown in the question), which is mathematically different from taking separate 1D transforms.
The key to your question is in fftw_plan_many_dft. Here is a link to the full documentation.
Here is an example (modified from the above link) that illustrates what you're looking for.
#include "fftw3.h"
int main() {
fftw_complex *A; // array of data
A = (fftw_complex*) fftw_malloc(sizeof(fftw_complex)*10*3);
// ...
/* Transform each column of a 2d array with 10 rows and 3 columns */
int rank = 1; /* not 2: we are computing 1d transforms */
int n[] = {10}; /* 1d transforms of length 10 */
int howmany = 3;
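/* distance between the starts of two consecutive columns (row-major layout) */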
int idist = 1;
int odist = 1;
/* distance between two elements in the same column */
int istride = 3;
int ostride = 3;
int *inembed = n, *onembed = n;
/* forward, in-place, 1D transform of each column */
fftw_plan p;
p = fftw_plan_many_dft(rank, n, howmany, A, inembed, istride, idist, A, onembed, ostride, odist, FFTW_FORWARD, FFTW_ESTIMATE);
// ...
/* run transform */
fftw_execute_dft(p, A, A);
// ...
/* we don't want memory leaks */
fftw_destroy_plan(p);
fftw_free(A);
}
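As a usage note, based on the same fftw_plan_many_dft documentation: to transform each row of that same 10x3 row-major array instead, you would keep rank = 1 but use n[] = {3}, howmany = 10, istride = ostride = 1 (elements within a row are contiguous), and idist = odist = 3 (consecutive rows start three elements apart).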

How to avoid detecting image frame when using findContours

How can I avoid detecting the frame of an image when using findContours (OpenCV)? Until I found "OpenCV findContours always finds two contours for every object" and implemented that answer, I was not detecting the internal object consistently (the object outline was broken into several pieces), but now I detect the image frame every time.
The image is of a quad-rotor UAV seen from the bottom; I am using a series of pictures for 'training' object detection. For that, I need to be sure that I can consistently get the UAV object. I guess I could invert the colors, but that seems like a dirty hack.
The images are, first, the input image just before findContours, and then the resulting contours. I have seven test images, and all seven have a frame and the UAV. The Hu moments are very similar (as expected).
The code (C++11, and quite messy) for finding the contours/objects and calculating the Hu moments:
#include <opencv/cv.h>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <iostream>
#include <fstream>
#include <string>
using namespace cv;
using namespace std;
#define EROSION_SIZE 1
#define ERODE_CANNY_PREP_ITERATIONS 5
int main() {
Mat image, canny_output, element, padded;
RNG rng(12345);
int numbers[] = {195, 223, 260, 295, 331, 368, 396};
string pre = "/home/alrekr/Pictures/UAS/hu-images/frame_";
string middle = "_threshold";
string post = ".png";
string filename = "";
vector<vector<Point>> contours;
vector<Vec4i> hierarchy;
ofstream fout("/home/alrekr/Pictures/UAS/hu-data/hu.dat");
element = getStructuringElement(MORPH_RECT,
Size(2*EROSION_SIZE + 1, 2*EROSION_SIZE+1),
Point(EROSION_SIZE, EROSION_SIZE));
namedWindow("Window", CV_WINDOW_AUTOSIZE);
for (int i : numbers) {
filename = pre + to_string(i) + middle + post;
image = imread(filename, CV_LOAD_IMAGE_GRAYSCALE);
erode(image, image, element, Point(-1,-1), ERODE_CANNY_PREP_ITERATIONS);
imwrite("/home/alrekr/Pictures/UAS/hu-data/prep_for_canny_" + to_string(i) + ".png", image);
findContours(image, contours, hierarchy, CV_RETR_TREE, CV_CHAIN_APPROX_SIMPLE, Point(0, 0));
vector<Moments> mu(contours.size());
if(contours.size() < 1) {
cout << "No contours found" << endl;
} else {
cout << "Contours found: " << contours.size() << endl;
}
vector<Point2f> mc(contours.size());
for(int j = 0; j < (int)contours.size(); j++) {
mc[j] = Point2f(mu[j].m10/mu[j].m00 , mu[j].m01/mu[j].m00);
}
Mat drawing = Mat::zeros(image.size(), CV_8UC3);
for(int j = 0; j < (int)contours.size(); j++) {
Scalar color = Scalar(rng.uniform(0, 255), rng.uniform(0,255), rng.uniform(0,255));
drawContours(drawing, contours, j, color, 2, 8, hierarchy, 0, Point());
imshow("Window", drawing);
waitKey(0);
}
imwrite("/home/alrekr/Pictures/UAS/hu-data/cannied_" + to_string(i) + ".png", drawing);
fout << "Frame " << i << "\n";
for(int j = 0; j < (int)contours.size(); j++) {
mu[j] = moments(contours[j]);
double hu[7];
HuMoments(mu[j], hu);
fout << "Object " << to_string(j) << "\n";
fout << hu[0] << "\n";
fout << hu[1] << "\n";
fout << hu[2] << "\n";
fout << hu[3] << "\n";
fout << hu[4] << "\n";
fout << hu[5] << "\n";
fout << hu[6] << "\n";
}
}
fout.close();
return 0;
}
The function cv::findContours describes the contours of areas consisting of ones. The areas in which you are interested are black, though.
So the solution is simple. Invert the input image before detecting contours:
image = 255 - image;
Below is a code example which I derived from your example above:
#include <opencv2/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <iostream>
#include <string>
#define EROSION_SIZE 1
#define ERODE_CANNY_PREP_ITERATIONS 5
int main( int argc, char ** argv )
{
// Display the version of the linked OpenCV library.
std::cout << "Using OpenCV " << CV_VERSION_MAJOR << "." << CV_VERSION_MINOR << ".";
std::cout << CV_VERSION_REVISION << CV_VERSION_STATUS << std::endl;
// Load the input file.
std::string filename = std::string( argv[ 1 ] );
cv::Mat image = imread( filename, cv::IMREAD_GRAYSCALE );
// Invert the image so the area of the UAV is filled with 1's. This is necessary since
// cv::findContours describes the boundary of areas consisting of 1's.
image = 255 - image;
// Detect contours.
std::vector< std::vector< cv::Point> > contours;
std::vector< cv::Vec4i > hierarchy;
cv::findContours( image, contours, hierarchy, cv::RETR_TREE, cv::CHAIN_APPROX_SIMPLE );
std::cout << "Contours found: " << contours.size() << std::endl;
// Display and save the results.
cv::RNG rng( 12345 );
cv::Mat contourImage = cv::Mat::zeros( image.size(), CV_8UC3);
for( size_t j = 0; j < contours.size(); j++ )
{
cv::Scalar color( rng.uniform( 0, 255 ), rng.uniform( 0,255 ), rng.uniform( 0, 255 ) );
cv::drawContours( contourImage, contours, j, color, 2, 8, hierarchy, 0, cv::Point() );
}
// cv::imwrite( "contours.png", contourImage );
cv::imshow( "contours", contourImage );
cv::waitKey( 0 );
return 0;
}
The console output is as follows:
$ ./a.out gvlGK.png
Using OpenCV 3.0.0-beta
Contours found: 1
and the resulting contour image is this:
Another solution would be:
Find the bounding rectangle of the contour:
x,y,w,h = cv2.boundingRect(c)
Then compare the size of the image with the size of the bounding rectangle, for example:
cnt_size = w*h
if abs(cnt_size - img_size) <= ERROR_THRESHOLD:
    ## discard this contour
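In the C++ API used elsewhere in this question, the same idea might look like the sketch below. It assumes image and contours come from the question's code, and ERROR_THRESHOLD is an arbitrary tolerance you would tune yourself:
// Drop any contour whose bounding box covers (nearly) the whole image,
// on the assumption that such a contour is the image frame.
const double ERROR_THRESHOLD = 100.0; // arbitrary tolerance, in pixels^2
double img_size = static_cast<double>(image.cols) * image.rows;
vector<vector<Point>> kept;
for (size_t j = 0; j < contours.size(); j++)
{
    Rect box = boundingRect(contours[j]);
    double cnt_size = static_cast<double>(box.width) * box.height;
    // The bounding box can never be larger than the image, so no abs() is needed.
    if (img_size - cnt_size <= ERROR_THRESHOLD)
        continue; // this contour is (almost) the frame, so discard it
    kept.push_back(contours[j]);
}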
If you have a white background, first invert it using the THRESH_BINARY_INV threshold type and then find the contours:
image = imread(filename, CV_LOAD_IMAGE_GRAYSCALE);
threshold(image,image,100,255,THRESH_BINARY_INV);
findContours( image, contours, hierarchy, cv::RETR_TREE, cv::CHAIN_APPROX_SIMPLE );
This will only return the contour that you need.
