I have a 2-dimensional matrix in which each column corresponds to one independent signal, and I want to perform a 1D FFT on each column. In MATLAB, applying fft to the 2D matrix does the trick, but I am porting my code to C++ with FFTW and I wonder if there is a way to do the same there. I tried the following code, setting the column size to 1 and the row size to 4 (the total number of rows), but it does not help.
#include <iostream>
#include <complex>
#include "fftw3.h"
using namespace std;
int main(int argc, char** argv)
{
    complex<double> data[4][2];
    data[0][0] = complex<double>(1,1);
    data[1][0] = complex<double>(2,1);
    data[2][0] = complex<double>(3,1);
    data[3][0] = complex<double>(4,1);
    data[0][1] = complex<double>(1,1);
    data[1][1] = complex<double>(1,2);
    data[2][1] = complex<double>(1,3);
    data[3][1] = complex<double>(1,4);

    cout << "original data ..." << endl;
    cout << data[0][0] << '\t' << data[0][1] << endl;
    cout << data[1][0] << '\t' << data[1][1] << endl;
    cout << data[2][0] << '\t' << data[2][1] << endl;
    cout << data[3][0] << '\t' << data[3][1] << endl;
    cout << endl << endl;

    fftw_plan plan = fftw_plan_dft_2d(4, 1, (fftw_complex*)&data[0][0], (fftw_complex*)&data[0][0], FFTW_FORWARD, FFTW_ESTIMATE);
    fftw_execute(plan);

    cout << "after fftw ..." << endl;
    cout << data[0][0] << '\t' << data[0][1] << endl;
    cout << data[1][0] << '\t' << data[1][1] << endl;
    cout << data[2][0] << '\t' << data[2][1] << endl;
    cout << data[3][0] << '\t' << data[3][1] << endl;
    return 0;
}
The above code takes the first and second rows, reshapes them into a 2x2 matrix, and then performs a 2D FFT on that.
Up to now, the only approach that comes to my mind is the following. Say I have an NxM matrix (N rows, M columns): I create M plans for M 1D FFTs and execute them one after another to get the result. But in my practical application the matrix is very big and M is very large, so doing it this way is inefficient. Any better idea? Thanks.
For those stumbling across this nowadays: the FFTW developers have implemented routines for this operation, which are faster than looping over each column and taking a separate transform for each. You certainly don't want to take a 2D transform (as is shown in the question), which is mathematically different from taking a 1D transform of each column.
The key to your question is fftw_plan_many_dft; see the FFTW documentation on the advanced interface for the full details.
Here is an example (modified from the documentation) that illustrates what you're looking for.
#include "fftw3.h"
int main() {
    fftw_complex *A; // array of data, stored row-major
    A = (fftw_complex*) fftw_malloc(sizeof(fftw_complex) * 10 * 3);
    // ... fill A ...

    /* Transform each column of a 2d array with 10 rows and 3 columns */
    int rank = 1;    /* not 2: we are computing 1d transforms */
    int n[] = {10};  /* 1d transforms of length 10 */
    int howmany = 3; /* one transform per column */
    /* distance between the first elements of two consecutive columns */
    int idist = 1;
    int odist = 1;
    /* distance between two elements in the same column */
    int istride = 3;
    int ostride = 3;
    int *inembed = n, *onembed = n;

    /* forward, in-place, 1D transform of each column */
    fftw_plan p;
    p = fftw_plan_many_dft(rank, n, howmany, A, inembed, istride, idist, A, onembed, ostride, odist, FFTW_FORWARD, FFTW_ESTIMATE);
    // ...
    /* run transform */
    fftw_execute_dft(p, A, A);
    // ...
    /* we don't want memory leaks */
    fftw_destroy_plan(p);
    fftw_free(A);
}
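Adapted to the 4x2 std::complex<double> array from the question, a rough sketch (untested) could look as follows; with row-major storage and 2 columns of length 4, elements of a column are 2 apart in memory and consecutive columns start 1 element apart. The reinterpret_cast is allowed because std::complex<double> and fftw_complex have compatible memory layouts.
#include <complex>
#include "fftw3.h"

int main() {
    // 4 rows x 2 columns, row-major, as in the question
    std::complex<double> data[4][2];
    // ... fill data ...

    int n[] = {4};                // each 1D transform runs over one column of length 4
    int howmany = 2;              // one transform per column
    int istride = 2, ostride = 2; // elements of a column are 2 apart in memory
    int idist = 1, odist = 1;     // consecutive columns start 1 element apart

    fftw_complex* in = reinterpret_cast<fftw_complex*>(&data[0][0]);
    // FFTW_ESTIMATE does not overwrite the input during planning,
    // so planning after filling the data is fine here
    fftw_plan p = fftw_plan_many_dft(1, n, howmany,
                                     in, n, istride, idist,
                                     in, n, ostride, odist,
                                     FFTW_FORWARD, FFTW_ESTIMATE);
    fftw_execute(p);
    fftw_destroy_plan(p);
    return 0;
}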
Related
I am confused about how I should call MurmurHash3_x86_128() when I have a lot of key values. The MurmurHash3 code can be found at https://github.com/aappleby/smhasher/blob/master/src/MurmurHash3.cpp. The method definition is given below.
void MurmurHash3_x86_128 ( const void * key, const int len,
uint32_t seed, void * out )
I am passing a different key value on each iteration of a for loop, as shown below, but the hash value returned is still the same. If I remove the for loop and pass the key values individually, the values are different. What am I doing wrong?
#include <iostream>
#include <cstdint>
#include <cstring>
#include <string>
#include <vector>
#include "MurmurHash3.h"
using namespace std;

int main()
{
    uint64_t seed = 100;
    vector<string> ex;
    ex.push_back("TAA");
    ex.push_back("ATT");
    for (int i = 0; i < ex.size(); i++)
    {
        uint64_t hash_otpt[2] = {};
        cout << hash_otpt << "\t" << endl;
        const char *key = ex[i].c_str();
        cout << key << endl;
        MurmurHash3_x64_128(key, strlen(key), seed, hash_otpt); // 0xb6d99cf8
        cout << hash_otpt << endl;
    }
    return 0;
}
The line
cout << hash_otpt << endl;
is emitting the address of hash_otpt, not its contents.
It should be
cout << hash_otpt[0] << hash_otpt[1] << endl;
Basically the 128-bit hash is split and stored in two 64-bit unsigned integers (the MSBs in one and the LSBs in another). On combining them, you get the complete hash.
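If you want to see the full 128-bit value as one hex string, a minimal sketch (assuming the MurmurHash3.h header from the linked repository; which half you print first is just a convention):
#include <cstdint>
#include <cstdio>
#include <cstring>
#include "MurmurHash3.h"

int main() {
    uint64_t hash_otpt[2] = {};
    const char* key = "TAA";
    MurmurHash3_x64_128(key, std::strlen(key), 100, hash_otpt);
    // print the two 64-bit halves back to back as one 32-digit hex number
    std::printf("%016llx%016llx\n",
                (unsigned long long)hash_otpt[1],
                (unsigned long long)hash_otpt[0]);
    return 0;
}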
I have searched countless forums and websites but I can't seem to find the answer. I'm trying to use SetConsoleTextAttribute but it only affects the text. How can I affect the whole screen like the command color 1f would? My code is:
#include <iostream>
#include <stdio.h>
#include <stdlib.h>
#include <windows.h>
#include <wincon.h>
using namespace std;
int main()
{
    SetConsoleTitle("C++ CALCULATOR"); // Title of window
    int x; // Decision
    int a; // First Number
    int b; // Second Number
    int c; // Answer
    HANDLE Con;
    Con = GetStdHandle(STD_OUTPUT_HANDLE);
    SetConsoleTextAttribute(Con, BACKGROUND_BLUE | FOREGROUND_BLUE | FOREGROUND_GREEN | FOREGROUND_RED);
    cout << "CALCULATOR" << endl << endl;
    cout << "1:ADDITION" << endl << "2:SUBTRACTION" << endl << "3:MULTIPLICATION";
    cout << endl << "4:DIVISION" << endl << "5:EXIT" << endl;
    cin >> x;
    switch (x)
    {
    case 1: // Addition code
        cout << endl << "ADDITION" << endl << "FIRST NUMBER:";
        cin >> a;
        cout << endl << "SECOND NUMBER:";
        cin >> b;
        c = a + b;
        cout << endl << "ANSWER:" << c;
        break;
    case 2: // Subtraction code
        cout << endl << "SUBTRACTION" << endl << "FIRST NUMBER:";
        cin >> a;
        cout << endl << "SECOND NUMBER:";
        cin >> b;
        c = a - b;
        cout << endl << "ANSWER:" << c;
        break;
    case 3: // Multiplication code
        cout << endl << "MULTIPLICATION" << endl << "FIRST NUMBER:";
        cin >> a;
        cout << endl << "SECOND NUMBER:";
        cin >> b;
        c = a * b;
        cout << endl << "ANSWER:" << c;
        break;
    case 4: // Division code
        cout << endl << "DIVISION" << endl << "FIRST NUMBER:";
        cin >> a;
        cout << endl << "SECOND NUMBER:";
        cin >> b;
        c = a / b;
        cout << endl << "ANSWER:" << c;
        break;
    case 5: // Exit code
        return 0;
    }
}
This solution relies on these WinAPI functions and structures:
GetConsoleScreenBufferInfo to get screen dimensions
FillConsoleOutputAttribute to fill screen with an attribute
CONSOLE_SCREEN_BUFFER_INFO structure to store screen information
The code is as follows:
HANDLE hCon;
CONSOLE_SCREEN_BUFFER_INFO csbiScreenInfo;
COORD coordStart = { 0, 0 }; // Screen coordinate for upper left
DWORD dwNumWritten = 0; // Holds # of cells written to
// by FillConsoleOutputAttribute
DWORD dwScrSize;
WORD wAttributes = BACKGROUND_BLUE | FOREGROUND_BLUE | FOREGROUND_GREEN | FOREGROUND_RED;
hCon = GetStdHandle(STD_OUTPUT_HANDLE);
// Get the screen buffer information including size and position of window
if (!GetConsoleScreenBufferInfo(hCon, &csbiScreenInfo))
{
    // Put error handling here
    return 1;
}
// Calculate number of cells on screen from screen size
dwScrSize = csbiScreenInfo.dwMaximumWindowSize.X * csbiScreenInfo.dwMaximumWindowSize.Y;
// Fill the screen with the specified attribute
FillConsoleOutputAttribute(hCon, wAttributes, dwScrSize, coordStart, &dwNumWritten);
// Set attribute for newly written text
SetConsoleTextAttribute(hCon, wAttributes);
The inline comments, together with the supplied documentation links, should be enough to understand the basics of what is going on. We get the screen size with GetConsoleScreenBufferInfo and use that to determine the number of cells on the screen to update with a new attribute using FillConsoleOutputAttribute. We then use SetConsoleTextAttribute to ensure that all new text that gets printed matches the attribute we used to color the entire console screen.
For brevity I have left off the error checks for the calls to FillConsoleOutputAttribute and SetConsoleTextAttribute, and only put a stub for the error handling of GetConsoleScreenBufferInfo. I leave it as an exercise for the original poster to add appropriate error handling if they so choose.
SetConsoleTextAttribute changes the attribute for new characters that you write to the console, but doesn't affect existing contents of the console.
If you want to change the attributes for existing characters already being displayed on the console, use WriteConsoleOutputAttribute instead.
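For example, a minimal sketch (error handling mostly omitted) that recolors the existing characters in the top row of the screen buffer:
#include <windows.h>
#include <vector>

int main() {
    HANDLE hCon = GetStdHandle(STD_OUTPUT_HANDLE);
    CONSOLE_SCREEN_BUFFER_INFO csbi;
    if (!GetConsoleScreenBufferInfo(hCon, &csbi))
        return 1;

    // One attribute per cell in the first row of the buffer
    std::vector<WORD> attrs(csbi.dwSize.X,
                            BACKGROUND_BLUE | FOREGROUND_RED | FOREGROUND_GREEN | FOREGROUND_BLUE);
    COORD start = { 0, 0 }; // start at the upper-left corner
    DWORD written = 0;
    WriteConsoleOutputAttribute(hCon, attrs.data(), (DWORD)attrs.size(), start, &written);
    return 0;
}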
I'm trying to use perspectiveTransform but I keep getting an error. I tried to follow the solution from this thread: http://answers.opencv.org/question/18252/opencv-assertion-failed-for-perspective-transform/
_players[i].getCoordinates() is of type Point
_homography_matrix is a 3 x 3 Mat
Mat temp_Mat = Mat::zeros(2, 1, CV_32FC2);
for (int i = 0; i < _players.size(); i++)
{
    cout << Mat(_players[i].get_Coordinates()) << endl;
    perspectiveTransform(Mat(_players[i].get_Coordinates()), temp_Mat, _homography_matrix);
}
Also, how do I convert temp_Mat into type Point?
The error I get is:
OpenCV Error: Assertion failed (scn + 1 == m.cols) in cv::perspectiveTransform
Basically you just need to correct from
Mat(_players[i].get_Coordinates()) ...
to
Mat2f(_players[i].get_Coordinates()) ...
In the first case you are creating a 2x1, 1-channel float matrix; in the second (correct) case you create a 1x1, 2-channel float matrix.
You also don't need to initialize temp_Mat.
You can also use the Mat_ template to better control the types of your Mats; e.g. creating a Mat of type CV_32FC2 is equivalent to creating a Mat2f.
This sample code also shows how to convert back and forth between Mat and Point:
#include <opencv2/opencv.hpp>
#include <vector>
using namespace std;
using namespace cv;

int main()
{
    // Some random points
    vector<Point2f> pts = { Point2f(1, 2), Point2f(5, 10) };

    // Some random transform matrix
    Mat1f m(3, 3, float(0.1));

    for (int i = 0; i < pts.size(); ++i)
    {
        cout << "Point: " << pts[i] << endl;

        Mat2f dst;
        perspectiveTransform(Mat2f(pts[i]), dst, m);
        cout << "Dst mat: " << dst << endl;

        Point2f p(dst(0));
        cout << "Dst point: " << p << endl;
    }
    return 0;
}
I have 2 std::vectors:
into the first vector, I emplace an instance;
into the second vector, I want to store the address of the instance just emplaced.
But it does not work, i.e., the stored address differs from the emplaced instance's address.
If it matters at all, I'm on Linux and using g++ 5.1 and clang 3.6 with -std=c++11.
Here's a working example to illustrate the problem.
#include <iostream>
#include <vector>
struct Foo {
    Foo(int a1, int a2) : f1(a1), f2(a2) {}
    int f1;
    int f2;
};

int main(int, char**) {
    std::vector<Foo> vec1;
    std::vector<Foo*> vec2;
    int num = 10;
    for (int i = 0; i < num; ++i) {
        vec1.emplace_back(i, i * i);
        // I want to store the address of *emplaced* instance...
        vec2.push_back(&vec1.back());
    }
    // same
    std::cout << "size 1: " << vec1.size() << std::endl;
    std::cout << "size 2: " << vec2.size() << std::endl;
    // same for me
    std::cout << "back 1: " << &vec1.back() << std::endl;
    std::cout << "back 2: " << vec2.back() << std::endl;
    // typically differ ?
    std::cout << "front 1: " << &vec1.front() << std::endl;
    std::cout << "front 2: " << vec2.front() << std::endl;
    for (int i = 0; i < num; ++i) {
        std::cout << i + 1 << "th" << std::endl;
        // same for last several (size % 4) for me
        std::cout << "1: " << &vec1[i] << std::endl;
        std::cout << "2: " << vec2[i] << std::endl;
    }
}
Questions
Is this correct behavior? I guess it's caused by storing the address of a temporary instance, but I want to know whether it's permitted by the standard (just curious).
If the above is true, how do I work around it? I resolved this by changing the first vector to vector<unique_ptr<Foo>>, but is there a more idiomatic way?
Two options:
1) You can simply fix your test. You just need to preallocate enough memory in your test first with
vec1.reserve(10);
This comes down to implementation details of std::vector. As more and more items are added to a std::vector it needs more space for them, and that space must be contiguous. So when there is not enough room for a new element, std::vector allocates a bigger block of memory, copies the existing elements into it, adds the new element, and finally frees the block of memory it used before. As a result, the addresses you stored in vec2 might become invalid.
However, if you preallocate enough memory for 10 elements, then your code is correct.
Or, since reserving memory can be a tricky thing to get right:
2) Use std::deque: insertion and deletion at either end of a deque never invalidate pointers or references to the rest of the elements (http://en.cppreference.com/w/cpp/container/deque), so you can forget about the problem of invalidated addresses. There is no need to reserve memory.
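A minimal sketch of both options, reusing the Foo type from the question:
#include <deque>
#include <vector>

struct Foo {
    Foo(int a1, int a2) : f1(a1), f2(a2) {}
    int f1;
    int f2;
};

int main() {
    // Option 1: reserve capacity up front so emplace_back never reallocates
    std::vector<Foo> vec1;
    vec1.reserve(10); // room for all 10 elements
    std::vector<Foo*> vec2;
    for (int i = 0; i < 10; ++i) {
        vec1.emplace_back(i, i * i);
        vec2.push_back(&vec1.back()); // stays valid: no reallocation happens
    }

    // Option 2: std::deque never invalidates pointers to existing elements
    // when inserting at either end, so no reserve is needed
    std::deque<Foo> deq1;
    std::vector<Foo*> deq2;
    for (int i = 0; i < 10; ++i) {
        deq1.emplace_back(i, i * i);
        deq2.push_back(&deq1.back());
    }
}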
Using the Eigen C++ library, I have a Matrix3f A, a Vector4f b, and a Vector4f c. I want to create a Matrix4f M out of these: the top-left 3-by-3 corner of M should be A, the final column of M should be b, and the bottom row of M should be c.
I know how to do this by simply creating a Matrix4f and assigning each element individually. But is there a more elegant solution that Eigen supports?
Does this count as elegant enough?
#include <Eigen/Dense>
#include <iostream>
using namespace Eigen;
using std::cout;
using std::endl;

int main(int argc, char *argv[])
{
    Matrix4f m = Matrix4f::Random();
    Matrix3f A = Matrix3f::Constant(0.1);
    Vector4f b = Vector4f::Constant(0.2), c = Vector4f::Constant(0.3);
    cout << m << endl << endl;
    cout << A << endl << endl;
    cout << b << endl << endl;
    cout << c << endl << endl;

    m.block(0, 0, 3, 3) = A; // top-left 3x3 corner
    m.col(3) = b;            // last column
    m.row(3) = c;            // last row (overwrites the (3,3) entry just written by b)

    cout << m << endl << endl;
    return 0;
}
Note that your question is somewhat ambiguous, since b and c both claim the (3,3) entry; its final value is determined by whichever of the two you assign last (here c, because the row is assigned after the column).
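If you want a single expression instead, Eigen's comma initializer can also stitch the blocks together; a sketch (here the (3,3) entry comes from c, since c supplies the whole bottom row):
#include <Eigen/Dense>
#include <iostream>
using namespace Eigen;

int main()
{
    Matrix3f A = Matrix3f::Constant(0.1);
    Vector4f b = Vector4f::Constant(0.2), c = Vector4f::Constant(0.3);

    Matrix4f M;
    // first block row: A (3x3) next to the top three entries of b (3x1);
    // second block row: c laid out as a 1x4 row
    M << A, b.head<3>(), c.transpose();

    std::cout << M << std::endl;
    return 0;
}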