SURF error while tracking object - macOS

I am trying to find an object in a video by calling a SURF function on each frame.
This is the SURF function:
void Identify_SURF_Frame( Mat img_object, Mat img_scene, CvRect in_box )
{
    //-- Step 1: Detect the keypoints using SURF Detector
    int minHessian = 1;
    SurfFeatureDetector detector( minHessian, 15, 3 );
    std::vector<KeyPoint> keypoints_object, keypoints_scene;
    detector.detect( img_object, keypoints_object );
    detector.detect( img_scene, keypoints_scene );

    //-- Step 2: Calculate descriptors (feature vectors)
    SurfDescriptorExtractor extractor;
    Mat descriptors_object, descriptors_scene;
    extractor.compute( img_object, keypoints_object, descriptors_object );
    extractor.compute( img_scene, keypoints_scene, descriptors_scene );

    //-- Step 3: Matching descriptor vectors using FLANN matcher
    //FlannBasedMatcher matcher;
    BruteForceMatcher< L2<float> > matcher;
    //BFMatcher matcher( cv::NORM_L2SQR, false );
    std::vector<DMatch> matches;
    matcher.match( descriptors_object, descriptors_scene, matches );

    double max_dist = 0; double min_dist = 100;
    //-- Quick calculation of max and min distances between keypoints
    for( int i = 0; i < descriptors_object.rows; i++ )
    {
        double dist = matches[i].distance;
        if( dist < min_dist ) min_dist = dist;
        if( dist > max_dist ) max_dist = dist;
    }

    //-- Keep only "good" matches (i.e. whose distance is less than 4*min_dist )
    std::vector<DMatch> good_matches;
    for( int i = 0; i < descriptors_object.rows; i++ )
    {
        if( matches[i].distance < 4 * min_dist )
        {
            good_matches.push_back( matches[i] );
        }
    }

    Mat img_matches;
    drawMatches( img_object, keypoints_object, img_scene, keypoints_scene, good_matches, img_matches,
                 Scalar::all(-1), Scalar::all(-1), vector<char>(), DrawMatchesFlags::NOT_DRAW_SINGLE_POINTS );

    //-- Localize the object
    std::vector<Point2f> obj;
    std::vector<Point2f> scene;
    for( int i = 0; i < good_matches.size(); i++ )
    {
        //-- Get the keypoints from the good matches
        obj.push_back( keypoints_object[ good_matches[i].queryIdx ].pt );
        scene.push_back( keypoints_scene[ good_matches[i].trainIdx ].pt );
    }
    Mat H = findHomography( obj, scene, CV_RANSAC );

    //-- Get the corners from the image_1 ( the object to be "detected" )
    std::vector<Point2f> obj_corners(2);
    obj_corners[0] = cvPoint( 0, 0 );
    obj_corners[1] = cvPoint( img_object.cols, 0 );
    //obj_corners[2] = cvPoint( img_object.cols, img_object.rows );
    //obj_corners[3] = cvPoint( 0, img_object.rows );
    std::vector<Point2f> scene_corners(2);
    perspectiveTransform( obj_corners, scene_corners, H );

    int x1, x2, y1, y2;
    x1 = scene_corners[0].x + Point2f( img_object.cols, 0 ).x;
    y1 = scene_corners[0].y + Point2f( img_object.cols, 0 ).y;
    x2 = scene_corners[0].x + Point2f( img_object.cols, 0 ).x + in_box.width;
    y2 = scene_corners[0].y + Point2f( img_object.cols, 0 ).y + in_box.height;
    rectangle( img_matches, cvPoint(x1, y1), cvPoint(x2, y2), Scalar( 255, 255, 255 ), 1 );

    // square is the global CvRect to use it in main
    square.x      = x1 - in_box.width;
    square.y      = y1;
    square.width  = in_box.width;
    square.height = in_box.height;

    //-- Show detected matches
    imshow( "Good Matches & Object detection", img_matches );
}
Using this function I am trying to draw a fixed-size square around the object when I find it.
The problem is that sometimes I get the following error, which I do not understand. Sometimes the program works fine without this error; when the error happens, the program crashes:
OpenCV Error: Assertion failed (count >= 4) in cvFindHomography, file /Users/seereen2004/Desktop/OpenCV-2.4.3/modules/calib3d/src/fundam.cpp, line 235
terminate called after throwing an instance of 'cv::Exception'
what(): /Users/seereen2004/Desktop/OpenCV-2.4.3/modules/calib3d/src/fundam.cpp:235: error: (-215) count >= 4 in function cvFindHomography
Program received signal: “SIGABRT”.
sharedlibrary apply-load-rules all
Any explanation please? Thanks in advance.

Something like that happens when you have too few good_matches (fewer than 4); findHomography needs at least 4 point correspondences. You need to skip those frames.
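A minimal guard, as a sketch against the function above (the size check and early return are the only additions; 4 is exactly the count >= 4 the assertion complains about):

if ( good_matches.size() < 4 )
{
    // not enough correspondences to estimate a homography;
    // skip this frame instead of letting cvFindHomography abort
    return;
}
Mat H = findHomography( obj, scene, CV_RANSAC );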


RPG style sprite movement with SDL2 using Crystal

I'm using SDL2 with Crystal to make a 16-bit RPG-style tile-based game. I've seen this question asked a ton, but even with all the answers I've come across, I'm still not getting the movement I'm looking for. Have you ever played Final Fantasy IV, V, or VI on the SNES? I'm looking for movement like that. No diagonal, the character is always over a tile, and never stops between 2 tiles.
# main game loop
loop do
  ticks = Time.monotonic.milliseconds / 1000.0
  case event = SDL::Event.poll
  when SDL::Event::Keyboard
    case event.sym
    when .right?
      character.move_right(ticks)
    end
  end
  character.draw(renderer)
  renderer.present
  # other code handling break and stuff omitted
end

# character.cr
VELOCITY = 100

def move_right(delta_ticks)
  @direction_facing = "east"
  @x += VELOCITY * delta_ticks
end

def draw(renderer)
  sprite = @directions[@direction_facing]
  renderer.copy(sprite, dstrect: SDL::Rect[@x.to_i, @y.to_i, 64, 64])
end
The way my current movement works, the character starts walking slowly, then picks up speed, then drops back down to walking slowly, like it's shifting gears or something. I know my line @x += VELOCITY * delta_ticks is wrong, but I wasn't able to find one that worked how I wanted. This also doesn't take into account stopping directly over a tile (in this case 64x64).
EDIT: I've tried to transpose the suggestion @genpfault gave. It still doesn't do what I want, but since I don't know C++, I may have missed some stuff. That code update is here
Make a little "tasklet" helper (I know zero about Crystal; in C++ I'd just have this be a class/struct with member data & functions) that encapsulates the character's current tile x/y position (and fine, sub-tile x/y position)
When you handle the left/right/up/down input, check if a current tasklet is still doing its thing; if not, make a new tasklet with the desired direction
Each frame while a tasklet is active, process it: increment/decrement (1px/frame? up to you) the character's fine x/y position until it hits the goal tile position; if the tasklet hits the goal position this frame, remove it (and update the character's tile position)
This way you prevent new input from interfering with character motion while it's in progress, as well as smoothly animating tile transitions.
Something like this:
#include <SDL2/SDL.h>
#include <memory>

struct Character
{
    int m_TileX;
    int m_TileY;
    int m_FineX; // in 16ths of a tile
    int m_FineY; // in 16ths of a tile
};

class ITask
{
public:
    virtual ~ITask() {};
    // override & return true to indicate this task is done
    virtual bool Run() = 0;
};

class CharacterAnimator : public ITask
{
public:
    CharacterAnimator( Character& c, int dx, int dy )
        : m_C( c )
        , m_Dx( dx )
        , m_Dy( dy )
    {}
    ~CharacterAnimator() override {}

    bool Run() override
    {
        m_C.m_FineX += m_Dx;
        m_C.m_FineY += m_Dy;
        bool done = false;
        if( m_C.m_FineX <= -16 ) { m_C.m_TileX--; m_C.m_FineX = 0; done = true; }
        if( m_C.m_FineY <= -16 ) { m_C.m_TileY--; m_C.m_FineY = 0; done = true; }
        if( m_C.m_FineX >=  16 ) { m_C.m_TileX++; m_C.m_FineX = 0; done = true; }
        if( m_C.m_FineY >=  16 ) { m_C.m_TileY++; m_C.m_FineY = 0; done = true; }
        return done;
    }

private:
    Character& m_C;
    int m_Dx;
    int m_Dy;
};

int main( int argc, char** argv )
{
    SDL_Init( SDL_INIT_EVERYTHING );
    SDL_Window* window = SDL_CreateWindow
    (
        "SDL2",
        SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED,
        640, 480,
        SDL_WINDOW_SHOWN
    );
    SDL_Renderer* renderer = SDL_CreateRenderer
    (
        window,
        0,
        SDL_RENDERER_ACCELERATED | SDL_RENDERER_PRESENTVSYNC
    );
    SDL_RenderSetLogicalSize( renderer, 320, 240 );

    Character c;
    c.m_TileX = 9;
    c.m_TileY = 7;
    c.m_FineX = 0;
    c.m_FineY = 0;

    std::unique_ptr< ITask > movementTask;
    bool running = true;
    while( running )
    {
        if( movementTask && movementTask->Run() )
        {
            movementTask.reset();
        }

        SDL_Event ev;
        while( SDL_PollEvent( &ev ) )
        {
            if( ev.type == SDL_QUIT )
                running = false;
            if( ev.type == SDL_KEYUP && ev.key.keysym.sym == SDLK_ESCAPE )
                running = false;
            if( ev.type == SDL_KEYDOWN && ev.key.keysym.sym == SDLK_UP && !movementTask )
                movementTask = std::unique_ptr< ITask >( new CharacterAnimator( c, 0, -1 ) );
            if( ev.type == SDL_KEYDOWN && ev.key.keysym.sym == SDLK_DOWN && !movementTask )
                movementTask = std::unique_ptr< ITask >( new CharacterAnimator( c, 0, 1 ) );
            if( ev.type == SDL_KEYDOWN && ev.key.keysym.sym == SDLK_LEFT && !movementTask )
                movementTask = std::unique_ptr< ITask >( new CharacterAnimator( c, -1, 0 ) );
            if( ev.type == SDL_KEYDOWN && ev.key.keysym.sym == SDLK_RIGHT && !movementTask )
                movementTask = std::unique_ptr< ITask >( new CharacterAnimator( c, 1, 0 ) );
        }

        SDL_SetRenderDrawColor( renderer, 0, 0, 0, 255 );
        SDL_RenderClear( renderer );

        // draw character
        SDL_SetRenderDrawColor( renderer, 255, 0, 0, 255 );
        SDL_Rect r =
        {
            c.m_TileX * 16 + c.m_FineX,
            c.m_TileY * 16 + c.m_FineY,
            16,
            16
        };
        SDL_RenderFillRect( renderer, &r );

        SDL_RenderPresent( renderer );
    }

    SDL_DestroyRenderer( renderer );
    SDL_DestroyWindow( window );
    SDL_Quit();
    return 0;
}

How can I read/transform the range images of the Stanford bunny .ply-files?

I want to read the non-reconstructed data of the Stanford Bunny. The point data is stored as several range images, which have to be transformed to be combined into one big point cloud, as described in the README:
These data files were obtained with a Cyberware 3030MS optical
triangulation scanner. They are stored as range images in the "ply"
format. The ".conf" file contains the transformations required to
bring each range image into a single coordinate system.
This is the .conf-file:
camera -0.0172 -0.0936 -0.734 -0.0461723 0.970603 -0.235889 0.0124573
bmesh bun000.ply 0 0 0 0 0 0 1
bmesh bun045.ply -0.0520211 -0.000383981 -0.0109223 0.00548449 -0.294635 -0.0038555 0.955586
bmesh bun090.ply 2.20761e-05 -3.34606e-05 -7.20881e-05 0.000335889 -0.708202 0.000602459 0.706009
bmesh bun180.ply 0.000116991 2.47732e-05 -4.6283e-05 -0.00215148 0.999996 -0.0015001 0.000892527
bmesh bun270.ply 0.000130273 1.58623e-05 0.000406764 0.000462632 0.707006 -0.00333301 0.7072
bmesh top2.ply -0.0530127 0.138516 0.0990356 0.908911 -0.0569874 0.154429 0.383126
bmesh top3.ply -0.0277373 0.0583887 -0.0796939 0.0598923 0.670467 0.68082 -0.28874
bmesh bun315.ply -0.00646017 -1.36122e-05 -0.0129064 0.00449209 0.38422 -0.00976512 0.923179
bmesh chin.ply 0.00435102 0.0882863 -0.108853 -0.441019 0.213083 0.00705734 0.871807
bmesh ear_back.ply -0.0829384 0.0353082 0.0711536 0.111743 0.925689 -0.215443 -0.290169
For each range image, seven values are stored, but I do not know what information can be obtained from these values.
I guess that three of them contain information about the translation and maybe three contain information about the rotation, but I didn't find anything about the order of these values or how to apply them to get one point cloud.
The wiki page doesn't deal with range images, and I found nothing more on the Stanford pages. They just mention that the method of Turk94 was used to scan this data set, but the paper has no information about the transformations needed. (Or I was not able to get the information out of this paper.)
Does anybody know how to read these values correctly? Why is there a transformation for the camera position? Is this just a good initial value to view the whole point cloud?
Thanks for your help.
EDIT:
OK. At this point I have already tried to read the data and to transform it correctly, but nothing worked. I use the boost library to handle the quaternions.
Here is my code for it:
boost::math::quaternion<double> translation, quaternionRotation;
//Get Transformation
translation = boost::math::quaternion<double>(0.0, lineData[2].toDouble(), lineData[3].toDouble(), lineData[4].toDouble());
quaternionRotation = boost::math::quaternion<double>(lineData[5].toDouble(),lineData[6].toDouble(),lineData[7].toDouble(),lineData[8].toDouble());
//do some file related stuff
//...
//for each line: read the point data and transform it and store the point in a data array
pointData[j].x = stringPointData[0].toDouble();
pointData[j].y = stringPointData[1].toDouble();
pointData[j].z = stringPointData[2].toDouble();
tmpQuat = boost::math::quaternion<double> (0.0,pointData[j].x,pointData[j].y,pointData[j].z);
//first translation
tmpQuat += translation;
//then quaternion rotation
tmpQuat = (quaternionRotation * (tmpQuat) * boost::math::conj(quaternionRotation));
//read the data from quaternion to a usual type
pointData[j].x = tmpQuat.R_component_2();
pointData[j].y = tmpQuat.R_component_3();
pointData[j].z = tmpQuat.R_component_4();
I assume that the first component of the quaternion is the w component and the others refer to x, y and z, as in equation 2 from here. If necessary I can provide screenshots of the false transformations.
EDIT: It is written in the source code of zipper, in the file zipper.c, that the 7 values are saved as follows:
transX transY transZ quatX quatY quatZ quatW
The quaternion is then transformed into a rotation matrix and then the rotation is performed with this new matrix. But even with this information, I am not able to transform it correctly. To test it, I implemented the function quat_to_mat() from zipper in my project:
glm::dmat4 cPlyObjectLoader::quat_to_mat(boost::math::quaternion<double> quat) const
{
    float s;
    float xs, ys, zs;
    float wx, wy, wz;
    float xx, xy, xz;
    float yy, yz, zz;
    glm::dmat4 mat(1.0);

    s = 2 / (quat.R_component_2()*quat.R_component_2() +
             quat.R_component_3()*quat.R_component_3() +
             quat.R_component_4()*quat.R_component_4() +
             quat.R_component_1()*quat.R_component_1());

    xs = quat.R_component_2() * s;
    ys = quat.R_component_3() * s;
    zs = quat.R_component_4() * s;

    wx = quat.R_component_1() * xs;
    wy = quat.R_component_1() * ys;
    wz = quat.R_component_1() * zs;

    xx = quat.R_component_2() * xs;
    xy = quat.R_component_2() * ys;
    xz = quat.R_component_2() * zs;

    yy = quat.R_component_3() * ys;
    yz = quat.R_component_3() * zs;
    zz = quat.R_component_4() * zs;

    mat[0][0] = 1 - (yy + zz);
    mat[0][1] = xy - wz;
    mat[0][2] = xz + wy;
    mat[0][3] = 0;

    mat[1][0] = xy + wz;
    mat[1][1] = 1 - (xx + zz);
    mat[1][2] = yz - wx;
    mat[1][3] = 0;

    mat[2][0] = xz - wy;
    mat[2][1] = yz + wx;
    mat[2][2] = 1 - (xx + yy);
    mat[2][3] = 0;

    mat[3][0] = 0;
    mat[3][1] = 0;
    mat[3][2] = 0;
    mat[3][3] = 1;

    return mat;
}
Now I am doing the translation and rotation with a vector and this matrix:
quaternionRotation = boost::math::quaternion<double>(lineData[8].toDouble(),lineData[5].toDouble(),lineData[6].toDouble(),lineData[7].toDouble());
rotationMat = this->quat_to_mat(quaternionRotation);
translationVec = glm::dvec4(lineData[2].toDouble(), lineData[3].toDouble(), lineData[4].toDouble(),0.0);
//same stuff as above
//...
glm::dvec4 curPoint = glm::dvec4(pointData[j].x,pointData[j].y,pointData[j].z,1.0);
curPoint += translationVec;
curPoint = rotationMat*curPoint;
The result is different from my quaternion rotation (why? It should be the same), but it is still not correct.
Debug information:
- the input of all transformations is correct
- the input of all points is correct
As I read on the Stanford 3D Scanning Repository:
For all the Stanford models, alignment was done using a modified ICP
algorithm, as described in this paper. These alignments are stored in
".conf" files, which list each range image in the model along with a
translation and a quaternion rotation.
Here is the link to "this paper"
Edit: The two methods are called zippering and volumetric merging.
As Ello mentioned, it is written at the Stanford 3D repository:
For all the Stanford models, alignment was done using a modified ICP algorithm, as described in this paper. These alignments are stored in ".conf" files, which list each range image in the model along with a translation and a quaternion rotation.
But that is not enough to understand everything in this data file.
It is correct that the first line:
camera -0.0172 -0.0936 -0.734 -0.0461723 0.970603 -0.235889 0.0124573
stores a good initial camera position, and every other line starting with bmesh refers to a .ply-file which stores a range image.
The transformation values are stored as follows:
transX transY transZ quatX quatY quatZ quatW
where trans... refers to a translation value and quat... refers to a value of the quaternion. Currently I do not know why it doesn't work with the quaternion rotation by itself, but by transforming it into a rotation matrix with the code of zipper the transformation is correct. Be aware that the translation is stored first, but to get a correct transformation the rotation has to be applied first and the translation afterwards.
My code snippet to read the files and transform them is the following:
boost::math::quaternion<double> translation, quaternionRotation;
//Get Transformation
translationVec = glm::dvec4(lineData[2].toDouble(), lineData[3].toDouble(), lineData[4].toDouble(),0.0);
quaternionRotation = boost::math::quaternion<double>(lineData[8].toDouble(),lineData[5].toDouble(),lineData[6].toDouble(),lineData[7].toDouble());
//calculate the unit quaternion
double magnitude = std::sqrt(
quaternionRotation.R_component_1()*quaternionRotation.R_component_1()+
quaternionRotation.R_component_2()*quaternionRotation.R_component_2()+
quaternionRotation.R_component_3()*quaternionRotation.R_component_3()+
quaternionRotation.R_component_4()*quaternionRotation.R_component_4());
quaternionRotation /= magnitude;
rotationMat = this->quat_to_mat(quaternionRotation);
//do some file related stuff
//...
//for each line: read the point data and transform it and store the point in a data array
pointData[j].x = stringPointData[0].toDouble();
pointData[j].y = stringPointData[1].toDouble();
pointData[j].z = stringPointData[2].toDouble();
//transform the curren point
glm::dvec4 curPoint = glm::dvec4(pointData[j].x,pointData[j].y,pointData[j].z,1.0);
//first rotation
curPoint = rotationMat*curPoint;
//then translation
curPoint += translationVec;
//store the data in a data array
pointData[j].x = curPoint.x;
pointData[j].y = curPoint.y;
pointData[j].z = curPoint.z;
I know that it's not the best solution, but it works. Feel free to optimize it yourself.
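One possible optimization, as a sketch (an assumption on my part, using glm's matrix_transform header; everything else reuses the variables from the snippet above): compose the rotation and the translation once per range image, so p' = T * (R * p) becomes a single matrix multiplication per point.

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// rotate first, then translate: M = T * R, built once per range image
glm::dmat4 T = glm::translate( glm::dmat4( 1.0 ), glm::dvec3( translationVec ) );
glm::dmat4 M = T * rotationMat;
// then one multiplication per point
glm::dvec4 curPoint = M * glm::dvec4( pointData[j].x, pointData[j].y, pointData[j].z, 1.0 );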
Here is the file converter that I wrote. It will assemble all the scans into a single file, one point per line. It supports different file formats (including Stanford .conf files).
#include <string>
#include <vector>
#include <sstream>
#include <iostream>
#include <stdio.h>
#include <ctype.h>
#include <string.h>
#include <stdlib.h>
#include <math.h>

#ifndef M_PI
#define M_PI 3.14159265
#endif

class LineInput {
public:
    LineInput(const std::string& filename) {
        F_ = fopen(filename.c_str(), "r" ) ;
        ok_ = (F_ != 0) ;
    }
    ~LineInput() {
        if(F_ != 0) {
            fclose(F_); F_ = 0 ;
        }
    }
    bool OK() const { return ok_ ; }
    bool eof() const { return feof(F_) ; }
    bool get_line() {
        line_[0] = '\0' ;
        // Skip the empty lines
        while(!isprint(line_[0])) {
            if(fgets(line_, MAX_LINE_LEN, F_) == 0) {
                return false ;
            }
        }
        // If the line ends with a backslash, append
        // the next line to the current line.
        bool check_multiline = true ;
        int total_length = MAX_LINE_LEN ;
        char* ptr = line_ ;
        while(check_multiline) {
            int L = strlen(ptr) ;
            total_length -= L ;
            ptr = ptr + L - 2;
            if(*ptr == '\\' && total_length > 0) {
                *ptr = ' ' ;
                ptr++ ;
                fgets(ptr, total_length, F_) ;
            } else {
                check_multiline = false ;
            }
        }
        if(total_length < 0) {
            std::cerr
                << "MultiLine longer than "
                << MAX_LINE_LEN << " bytes" << std::endl ;
        }
        return true ;
    }
    int nb_fields() const { return field_.size() ; }
    char* field(int i) { return field_[i] ; }
    int field_as_int(int i) {
        int result ;
        ok_ = ok_ && (sscanf(field(i), "%d", &result) == 1) ;
        return result ;
    }
    double field_as_double(int i) {
        double result ;
        ok_ = ok_ && (sscanf(field(i), "%lf", &result) == 1) ;
        return result ;
    }
    bool field_matches(int i, const char* s) {
        return !strcmp(field(i), s) ;
    }
    void get_fields(const char* separators=" \t\r\n") {
        field_.resize(0) ;
        char* tok = strtok(line_,separators) ;
        while(tok != 0) {
            field_.push_back(tok) ;
            tok = strtok(0,separators) ;
        }
    }
private:
    enum { MAX_LINE_LEN = 65535 } ;
    FILE* F_ ;
    char line_[MAX_LINE_LEN] ;
    std::vector<char*> field_ ;
    bool ok_ ;
} ;

std::string to_string(int x, int mindigits) {
    char buff[100] ;
    sprintf(buff, "%03d", x) ;
    return std::string(buff) ;
}

double M[4][4] ;

void transform(double* xyz) {
    double xyzw[4] ;
    for(unsigned int c=0; c<4; c++) {
        xyzw[c] = M[3][c] ;
    }
    for(unsigned int j=0; j<4; j++) {
        for(unsigned int i=0; i<3; i++) {
            xyzw[j] += M[i][j] * xyz[i] ;
        }
    }
    for(unsigned int c=0; c<3; c++) {
        xyz[c] = xyzw[c] / xyzw[3] ;
    }
}

bool read_frames_file(int no) {
    std::string filename = "scan" + to_string(no,3) + ".frames" ;
    std::cerr << "Reading frames from:" << filename << std::endl ;
    LineInput in(filename) ;
    if(!in.OK()) {
        std::cerr << " ... not found" << std::endl ;
        return false ;
    }
    while(!in.eof() && in.get_line()) {
        in.get_fields() ;
        if(in.nb_fields() == 17) {
            int f = 0 ;
            for(unsigned int i=0; i<4; i++) {
                for(unsigned int j=0; j<4; j++) {
                    M[i][j] = in.field_as_double(f) ; f++ ;
                }
            }
        }
    }
    return true ;
}

bool read_pose_file(int no) {
    std::string filename = "scan" + to_string(no,3) + ".pose" ;
    std::cerr << "Reading pose from:" << filename << std::endl ;
    LineInput in(filename) ;
    if(!in.OK()) {
        std::cerr << " ... not found" << std::endl ;
        return false ;
    }
    double xyz[3] ;
    double euler[3] ;
    in.get_line() ;
    in.get_fields() ;
    xyz[0] = in.field_as_double(0) ;
    xyz[1] = in.field_as_double(1) ;
    xyz[2] = in.field_as_double(2) ;
    in.get_line() ;
    in.get_fields() ;
    euler[0] = in.field_as_double(0) * M_PI / 180.0 ;
    euler[1] = in.field_as_double(1) * M_PI / 180.0 ;
    euler[2] = in.field_as_double(2) * M_PI / 180.0 ;

    double sx = sin(euler[0]);
    double cx = cos(euler[0]);
    double sy = sin(euler[1]);
    double cy = cos(euler[1]);
    double sz = sin(euler[2]);
    double cz = cos(euler[2]);

    M[0][0] = cy*cz;
    M[0][1] = sx*sy*cz + cx*sz;
    M[0][2] = -cx*sy*cz + sx*sz;
    M[0][3] = 0.0;
    M[1][0] = -cy*sz;
    M[1][1] = -sx*sy*sz + cx*cz;
    M[1][2] = cx*sy*sz + sx*cz;
    M[1][3] = 0.0;
    M[2][0] = sy;
    M[2][1] = -sx*cy;
    M[2][2] = cx*cy;
    M[2][3] = 0.0;
    M[3][0] = xyz[0];
    M[3][1] = xyz[1];
    M[3][2] = xyz[2];
    M[3][3] = 1.0;
    return true ;
}

void setup_transform_from_translation_and_quaternion(
    double Tx, double Ty, double Tz,
    double Qx, double Qy, double Qz, double Qw
) {
    /* for unit q, just set s = 2 or set xs = Qx + Qx, etc. */
    double s = 2.0 / (Qx*Qx + Qy*Qy + Qz*Qz + Qw*Qw);

    double xs = Qx * s;
    double ys = Qy * s;
    double zs = Qz * s;

    double wx = Qw * xs;
    double wy = Qw * ys;
    double wz = Qw * zs;

    double xx = Qx * xs;
    double xy = Qx * ys;
    double xz = Qx * zs;

    double yy = Qy * ys;
    double yz = Qy * zs;
    double zz = Qz * zs;

    M[0][0] = 1.0 - (yy + zz);
    M[0][1] = xy - wz;
    M[0][2] = xz + wy;
    M[0][3] = 0.0;
    M[1][0] = xy + wz;
    M[1][1] = 1 - (xx + zz);
    M[1][2] = yz - wx;
    M[1][3] = 0.0;
    M[2][0] = xz - wy;
    M[2][1] = yz + wx;
    M[2][2] = 1 - (xx + yy);
    M[2][3] = 0.0;
    M[3][0] = Tx;
    M[3][1] = Ty;
    M[3][2] = Tz;
    M[3][3] = 1.0;
}

bool read_points_file(int no) {
    std::string filename = "scan" + to_string(no,3) + ".3d" ;
    std::cerr << "Reading points from:" << filename << std::endl ;
    LineInput in(filename) ;
    if(!in.OK()) {
        std::cerr << " ... not found" << std::endl ;
        return false ;
    }
    while(!in.eof() && in.get_line()) {
        in.get_fields() ;
        double xyz[3] ;
        if(in.nb_fields() >= 3) {
            for(unsigned int c=0; c<3; c++) {
                xyz[c] = in.field_as_double(c) ;
            }
            transform(xyz) ;
            printf("%f %f %f\n",xyz[0],xyz[1],xyz[2]) ;
        }
    }
    return true ;
}

/* only works for ASCII PLY files */
void read_ply_file(char* filename) {
    std::cerr << "Reading points from:" << filename << std::endl;
    LineInput in(filename) ;
    if(!in.OK()) {
        std::cerr << filename << ": could not open" << std::endl ;
        return;
    }
    bool reading_vertices = false;
    int nb_vertices = 0 ;
    int nb_read_vertices = 0 ;
    while(!in.eof() && in.get_line()) {
        in.get_fields();
        if(reading_vertices) {
            double xyz[3] ;
            for(unsigned int c=0; c<3; c++) {
                xyz[c] = in.field_as_double(c) ;
            }
            transform(xyz) ;
            printf("%f %f %f\n",xyz[0],xyz[1],xyz[2]) ;
            ++nb_read_vertices;
            if(nb_read_vertices == nb_vertices) {
                return;
            }
        } else if(
            in.field_matches(0,"element") &&
            in.field_matches(1,"vertex")
        ) {
            nb_vertices = in.field_as_int(2);
        } else if(in.field_matches(0,"end_header")) {
            reading_vertices = true;
        }
    }
}

/* For Stanford scanning repository */
void read_conf_file(char* filename) {
    LineInput in(filename) ;
    if(!in.OK()) {
        std::cerr << filename << ": could not open" << std::endl ;
        return;
    }
    while(!in.eof() && in.get_line()) {
        in.get_fields();
        if(in.nb_fields() == 0) { continue ; }
        if(in.field_matches(0,"bmesh")) {
            char* filename = in.field(1);
            // Translation vector
            double Tx = in.field_as_double(2);
            double Ty = in.field_as_double(3);
            double Tz = in.field_as_double(4);
            // Quaternion
            double Qx = in.field_as_double(5);
            double Qy = in.field_as_double(6);
            double Qz = in.field_as_double(7);
            double Qw = in.field_as_double(8);
            setup_transform_from_translation_and_quaternion(Tx,Ty,Tz,Qx,Qy,Qz,Qw);
            read_ply_file(filename);
        }
    }
}

int main(int argc, char** argv) {
    if(argc != 2) { return -1 ; }
    if(strstr(argv[1],".conf")) {
        read_conf_file(argv[1]);
    } else {
        int max_i = atoi(argv[1]) ;
        for(int i=0; i<=max_i; i++) {
            if(!read_frames_file(i)) {
                read_pose_file(i) ;
            }
            read_points_file(i) ;
        }
    }
    return 0 ;
}
Okay, so here is my solution, since none of the above worked for me (note: this is in Python using Blender's bpy). It seems that I need to transpose the rotation part of my 4x4 transformation matrix (note: I am using a standard way to convert the quaternion to a rotation matrix, not the one from zipper). Also, since I am using Blender, an imported model only stores its local coordinates relative to the object's world transformation, so the point = objWorld * point step below is Blender-specific.
# runs inside Blender's Python environment
import bpy
import mathutils

#loop
for meshName, transform in zip(plyFile, transformations):
    #Build Quaternion
    #transform structure [x, y, z, qx, qy, qz, qw]
    Rt = mathutils.Quaternion((transform[6], transform[3], transform[4], transform[5])).to_matrix().to_4x4()
    Rt.normalize()
    Rt.transpose()
    Rt[0][3] = transform[0]
    Rt[1][3] = transform[1]
    Rt[2][3] = transform[2]

    bpy.ops.object.select_all(action='DESELECT')
    #import the ply mesh into blender
    bpy.ops.import_mesh.ply(filepath=baseDir + meshName)
    #get the ply object
    obj = bpy.context.object
    #get objects world matrix
    objWorld = obj.matrix_world

    for index in range(len(obj.data.vertices)):
        #get local point
        point = mathutils.Vector([obj.data.vertices[index].co[0], obj.data.vertices[index].co[1], obj.data.vertices[index].co[2], 1.])
        #convert local point to world
        point = objWorld * point
        #apply ply transformation
        point = Rt * point
        #update the point in the mesh
        obj.data.vertices[index].co[0] = point[0]
        obj.data.vertices[index].co[1] = point[1]
        obj.data.vertices[index].co[2] = point[2]

#all vertex positions should be updated correctly
As mentioned in other answers, the Stanford 3D repository gives some info about the data organization in the '.conf' files, but the transformations for the bunny model were not working properly when using the quaternion data as provided.
I was also stuck on this registration problem for the bunny model, and based on my tests I have some extra considerations to add. When applying the transformation (rotations, to be more specific) I realized that the quaternion values were not rotating the cloud in the correct direction, but when using the corresponding Euler notation and changing the sign of one specific axis of rotation, I got the correct registration. So, back to the quaternion notation used in the '.conf' file: after some tests I noticed that by changing the sign of the 'w' component of the quaternion in each 'bmesh' row except the first (bun000.ply), the rotation by quaternion can be used, as sketched below.
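A sketch of that sign flip in code (my illustration, reusing the variable names from read_conf_file() in the converter answer above; rowIndex is assumed to count the bmesh rows in file order):

if ( rowIndex != 0 )  // every bunny scan except the first (bun000.ply)
{
    // negating only w turns the rotation into its inverse, which is
    // consistent with the matrix transpose used in the Blender answer above
    Qw = -Qw;
}
setup_transform_from_translation_and_quaternion( Tx, Ty, Tz, Qx, Qy, Qz, Qw );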
Furthermore, for some reason, when registering the dragon (dragon_stand and dragon_side) and armadillo (armadillo_stand) Stanford point clouds, in order to get the correct result I had to use a different sequence for reading the quaternion data in the '.conf' file. It seems to be stored as:
tx ty tz qw qx qy qz
where 't' refers to a translation value and 'q' refers to a quaternion value. To be clear, I have only tested these three models, so I don't know what the default pattern for the quaternion values is. Besides, for these last two point cloud models, I did not need to change the '.conf' file.
I hope this could be useful for someone else trying to do the same.
Just in case someone is looking for a full Python implementation on the basis of what @DanceIgel found out, here is some code in Python 3.9.1, also generating a figure in matplotlib:
# Python 3.9.1
import numpy as np
import sys
import math
import glob
import matplotlib
matplotlib.use('TkAgg')
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
import open3d as o3d

def get_pointcloud_files(path):
    files = list()
    for f in glob.glob(path + '/*.ply'):
        files.append(f)
    return files

def get_pointcloud_from_file(path, filename):
    cloud = o3d.io.read_point_cloud(path + '/' + filename)
    return cloud

def get_transformations_from_file(path, filename):
    with open(path + '/' + filename) as f:
        lines = (line for line in f)
        source = np.loadtxt(lines, delimiter=' ', skiprows=1, dtype='str')
        source = np.delete(source, 0, 1) #remove camera
    filenames = source[:,0]
    source = source[filenames.argsort()]
    filenames = np.sort(filenames)
    translations = list()
    for row in source[:,1:4]:
        translations.append(np.reshape(row, [3,1]).astype(np.float32))
    quaternions = list()
    for row in source[:,4:]:
        quaternions.append(np.reshape(row, [4,1]).astype(np.float32))
    return filenames, translations, quaternions

def quaternion_rotation_matrix(Q):
    # Extract the values from Q
    q0 = Q[3]
    q1 = Q[0]
    q2 = Q[1]
    q3 = Q[2]

    # calculate unit quaternion
    magnitude = math.sqrt(q0*q0 + q1*q1 + q2*q2 + q3*q3)
    q0 = q0 / magnitude
    q1 = q1 / magnitude
    q2 = q2 / magnitude
    q3 = q3 / magnitude

    # First row of the rotation matrix
    r00 = 2 * (q0 * q0 + q1 * q1) - 1
    r01 = 2 * (q1 * q2 - q0 * q3)
    r02 = 2 * (q1 * q3 + q0 * q2)

    # Second row of the rotation matrix
    r10 = 2 * (q1 * q2 + q0 * q3)
    r11 = 2 * (q0 * q0 + q2 * q2) - 1
    r12 = 2 * (q2 * q3 - q0 * q1)

    # Third row of the rotation matrix
    r20 = 2 * (q1 * q3 - q0 * q2)
    r21 = 2 * (q2 * q3 + q0 * q1)
    r22 = 2 * (q0 * q0 + q3 * q3) - 1

    # 3x3 rotation matrix
    rot_matrix = np.array([[r00, r01, r02],
                           [r10, r11, r12],
                           [r20, r21, r22]])
    rot_matrix = np.transpose(rot_matrix)
    return rot_matrix

if __name__=="__main__": # $python visualization_bunny.py bunny/data
    path = sys.argv[1]

    # load transformations and filenames from file
    filenames, translations, quaternions = get_transformations_from_file(path, 'bun.conf')

    curr_transformation = np.zeros([3,4])
    clouds = list()
    for curr_filename, curr_quaternion, curr_translation in zip(filenames, quaternions, translations): # go through input files
        curr_cloud = get_pointcloud_from_file(path, curr_filename)
        # convert cloud to numpy
        curr_cloud = np.asarray(curr_cloud.points)
        # compute rotation matrix from quaternions
        curr_rotation_matr = quaternion_rotation_matrix(curr_quaternion)
        curr_rotation_matr = np.squeeze(curr_rotation_matr)
        curr_translation = np.squeeze(curr_translation)
        # create transformation matrix
        curr_transformation[:,0:3] = curr_rotation_matr
        curr_transformation[:,3] = curr_translation
        # transform current cloud
        for i in range(curr_cloud.shape[0]):
            # apply rotation
            curr_point = np.matmul(curr_rotation_matr, np.transpose(curr_cloud[i,:]))
            # apply translation
            curr_point = curr_point + curr_translation
            curr_cloud[i,0] = curr_point[0]
            curr_cloud[i,1] = curr_point[1]
            curr_cloud[i,2] = curr_point[2]
        # add current cloud to list of clouds
        clouds.append(curr_cloud)

    # plot separate point clouds in same graph
    ax = plt.axes(projection='3d')
    for cloud in clouds:
        ax.plot(cloud[:,0], cloud[:,1], cloud[:,2], 'bo', markersize=0.005)
    #ax.view_init(elev=90, azim=270)
    ax.view_init(elev=100, azim=270)
    plt.axis('off')
    plt.savefig("ZZZ_Stanford_Bunny_PointCloud.png", bbox_inches='tight')
    plt.show()

Improper Mandelbrot set output plotting

I am trying to write a program to display the Mandelbrot set for the numbers between (-3,-3) and (2,2) on my terminal.
The main function generates and feeds a complex number to the analyze function.
The analyze function returns the character "*" for a complex number z within the set and "." for numbers which lie outside the set.
The code:
#define MAX_A 2    // upper bound on real
#define MAX_B 2    // upper bound on imaginary
#define MIN_A -3   // lower bound on real
#define MIN_B -3   // lower bound on imaginary
#define NX 300     // no. of points along x
#define NY 200     // no. of points along y
#define max_its 50

int analyze(double real, double imag);

void main()
{
    double a, b;
    int x, x_arr, y, y_arr;
    int array[NX][NY];
    int res;
    for(y=NY-1,x_arr=0; y>=0; y--,x_arr++)
    {
        for(x=0,y_arr++; x<=NX-1; x++,y_arr++)
        {
            a = MIN_A + ( x/( (double)NX-1 )*(MAX_A-MIN_A) );
            b = MIN_B + ( y/( (double)NY-1 )*(MAX_B-MIN_B) );
            //printf("%f+i%f ",a,b);
            res = analyze(a,b);
            if(res>49)
                array[x][y] = 42;
            else
                array[x][y] = 46;
        }
        // printf("\n");
    }
    for(y=0; y<NY; y++)
    {
        for(x=0; x<NX; x++)
            printf("%2c", array[x][y]);
        printf("\n");
    }
}
The analyze function accepts a coordinate on the complex plane and computes (Z^2)+Z up to 50 times; while computing, if the complex number explodes, the function returns immediately, else it returns after finishing 50 iterations:
int analyze(double real, double imag)
{
    int iter = 0;
    double r = 4.0;
    while(iter<50)
    {
        if ( r < ( (real*real) + (imag*imag) ) )
        {
            return iter;
        }
        real = ( (real*real) - (imag*imag) + real );
        imag = ( (2*real*imag) + imag );
        iter++;
    }
    return iter;
}
So I am analyzing 60000 (NX * NY) numbers and displaying them on the terminal, considering a 3:2 ratio (300x200). I even tried 4:3 (NX:NY), but the output remains the same and the generated shape is not even close to the Mandelbrot set; the output appears inverted.
I browsed and came across lines like:
(x - 400) / ZOOM;
(y - 300) / ZOOM;
in many Mandelbrot codes, but I am unable to understand how these lines may rectify my output.
I guess I am having trouble mapping the output to the terminal!
(LB_Real,UB_Imag) --- (UB_Real,UB_Imag)
| |
(LB_Real,LB_Imag) --- (UB_Real,LB_Imag)
Any hint/help will be very useful.
The Mandelbrot recurrence is z_{n+1} = z_n^2 + c.
Here's your implementation:
real= ( (real*real) - (imag*imag) + real);
imag= ( (2*real*imag)+ imag);
Problem 1. You're updating real to its next value before you've used the old value to compute the new imag.
Problem 2. Assuming you fix problem 1, you're computing z_{n+1} = z_n^2 + z_n.
Here's how I'd do it using double:
int analyze(double cr, double ci) {
    double zr = 0, zi = 0;
    int r;
    for (r = 0; (r < 50) && (zr*zr + zi*zi < 4.0); ++r) {
        double zr1 = zr*zr - zi*zi + cr;
        double zi1 = 2 * zr * zi + ci;
        zr = zr1;
        zi = zi1;
    }
    return r;
}
But it's easier to understand if you use the standard C99 support for complex numbers:
#include <complex.h>

int analyze(double cr, double ci) {
    double complex c = cr + ci * I;
    double complex z = 0;
    int r;
    for (r = 0; (r < 50) && (cabs(z) < 2); ++r) {
        z = z * z + c;
    }
    return r;
}
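As for the inverted picture the question also asks about: the first row you print is the top of the terminal, so it must correspond to the largest imaginary value. A sketch of that mapping, reusing the question's macros and the fixed analyze() (the row-first iteration is my assumption about the intended output, not part of the original answer):

for (int row = 0; row < NY; row++) {
    /* imaginary part runs from MAX_B (top row) down to MIN_B (bottom row) */
    double b = MAX_B - row / (double)(NY - 1) * (MAX_B - MIN_B);
    for (int col = 0; col < NX; col++) {
        /* real part runs from MIN_A (left) to MAX_A (right) */
        double a = MIN_A + col / (double)(NX - 1) * (MAX_A - MIN_A);
        putchar(analyze(a, b) >= 50 ? '*' : '.');
    }
    putchar('\n');
}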

How to set an order a few pips above the order initiation bar in MQL4

I would like to create a stoploss order that will be placed above the high of the previous order's initiation bar in case this is a Sell order OR below the low of the previous order's initiation bar in case this is a Buy order.
Here is a picture to illustrate the issue ( the example depicts a sell order case ):
Any idea how to do that? The code below works fine if I use a stoploss that is fixed. If I replace the stoploss with variables based on High or Low, no orders are fired.
Here is my code:
//| Expert initialization function |
//+------------------------------------------------------------------+
/* -----------------------------------------------------------------------------
KINDLY RESPECT THIS & DO NOT MODIFY THE EDITS AGAIN
MQL4 FORMAT IS NOT INDENTATION SENSITIVE,
HAS IDE-HIGHLIGHTING
AND
HAS NO OTHER RESTRICTIVE CONDITIONS ----------- THIS CODING-STYLE HELPS A LOT
FOR BOTH
EASY & FAST
TRACKING OF NON-SYNTACTIC ERRORS
AND
IMPROVES FAST ORIENTATION
IN ALGORITHM CONSTRUCTORS' MODs
DURING RAPID PROTOTYPING
IF YOU CANNOT RESIST,
SOLVE RATHER ANY OTHER PROBLEM,
THAT MAY HELP SOMEONE ELSE's POST, THX
------------------------------------------- KINDLY RESPECT
THE AIM OF StackOverflow
------------------------------------------- TO HELP OTHERS DEVELOP UNDERSTANDING,
THEIRS UNDERSTANDING, OK? */
extern int    StartHour    = 14;
extern int    TakeProfit   = 70;
extern int    StopLoss     = 40;
extern double Lots         = 0.01;
extern int    MA_period    = 20;
extern int    MA_period_1  = 45;
extern int    RSI_period14 = 14;
extern int    RSI_period12 = 12;

void OnTick() {
    static bool IsFirstTick = true;
    static int  ticket = 0;

    double R_MA      = iMA(  Symbol(), Period(), MA_period,    0, 0, 0, 1 );
    double R_MA_Fast = iMA(  Symbol(), Period(), MA_period_1,  0, 0, 0, 1 );
    double R_RSI14   = iRSI( Symbol(), Period(), RSI_period14, 0, 0 );
    double R_RSI12   = iRSI( Symbol(), Period(), RSI_period12, 0, 0 );
    double HH        = High[1];
    double LL        = Low[ 1];

    if ( Hour() == StartHour ) {
        if ( IsFirstTick == true ) {
            IsFirstTick = false;
            bool res1 = OrderSelect( ticket, SELECT_BY_TICKET );
            if ( res1 == true ) {
                if ( OrderCloseTime() == 0 ) {
                    bool res2 = OrderClose( ticket, Lots, OrderClosePrice(), 10 );
                    if ( res2 == false ) {
                        Alert( "Error closing order # ", ticket );
                    }
                }
            }
            if ( High[1] < R_MA
              && R_RSI12 > R_RSI14
              && R_MA_Fast >= R_MA
               ) {
                ticket = OrderSend( Symbol(),
                                    OP_BUY,
                                    Lots,
                                    Ask,
                                    10,
                                    Bid - LL * Point * 10,
                                    Bid + TakeProfit * Point * 10,
                                    "Set by SimpleSystem"
                                    );
            }
            if ( ticket < 0 ) {
                Alert( "Error Sending Order!" );
            }
            else {
                if ( High[1] > R_MA
                  && R_RSI12 > R_RSI14
                  && R_MA_Fast <= R_MA
                   ) {
                    ticket = OrderSend( Symbol(),
                                        OP_SELL,
                                        Lots,
                                        Bid,
                                        10,
                                        Ask + HH * Point * 10,
                                        Ask - TakeProfit * Point * 10,
                                        "Set by SimpleSystem"
                                        );
                }
                if ( ticket < 0 ) {
                    Alert( "Error Sending Order!" );
                }
            }
        }
    }
    else {
        IsFirstTick = true;
    }
}
Major issue
Once having assigned ( per each Market Event Quote Arrival )
double HH = High[1],
LL = Low[ 1];
Your instruction to OP_SELL shall be repaired:
ticket = OrderSend( Symbol(),
OP_SELL,
Lots,
Bid,
10,
// ----------------------v--------------------------------------
// Ask + HH * 10 * Point,
// intention was High[1] + 10 [PT]s ( if Broker allows ), right?
NormalizeDouble( HH + 10 * Point,
Digits // ALWAYS NORMALIZE FOR .XTO-s
),
// vvv----------------------------------------------------------
// Ask - TakeProfit * Point * 10, // SAFER TO BASE ON BreakEvenPT
NormalizeDouble( Ask
- TakeProfit * Point * 10,
Digits // ALWAYS NORMALIZE FOR .XTO-s
),
"Set by SimpleSystem"
);
Symmetrically review and modify the OP_BUY case.
For Broker T&C collisions ( these need not get reflected in backtest ) review:
MarketInfo( _Symbol, MODE_STOPLEVEL )
MarketInfo( _Symbol, MODE_FREEZELEVEL )
or inspect it in the MT4 Terminal in the MarketWatch ( mouse right-click -> Symbols -> Properties ) for the STOPLEVEL distance.
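A sketch of such a pre-check for the OP_SELL case (my addition, not part of the original repair; the assumption is that a sell position closes at Ask, so the protective stop has to sit at least STOPLEVEL above it):

double minStopDist = MarketInfo( _Symbol, MODE_STOPLEVEL ) * Point;
double sellSL      = NormalizeDouble( HH + 10 * Point, Digits );
if ( sellSL - Ask < minStopDist )                      // Broker rejects anything closer
     sellSL = NormalizeDouble( Ask + minStopDist, Digits );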
Minor Issue
Review also your code for OrderClose() -- this will fail due to having wrong Price:
// ---------------------------------------------vvvvv----------------------------
bool res2 = OrderClose( ticket, Lots, OrderClosePrice(), 10 ); # was db.POOL()-SELECT'd

Estimating an Affine Transform between Two Images

I have a sample image:
I apply the affine transform with the following warp matrix:
[[ 1.25 0. -128 ]
[ 0. 2. -192 ]]
and crop a 128x128 part from the result to get an output image:
Now, I want to estimate the warp matrix and crop size/location from just comparing the sample and output image. I detect feature points using SURF, and match them by brute force:
There are many matches, of which I'm keeping the best three (by distance), since that is the number required to estimate the affine transform. I then use those 3 keypoints to estimate the affine transform using getAffineTransform. However, the transform it returns is completely wrong:
-0.00 1.87 -6959230028596648489132997794229911552.00
0.00 -1.76 -0.00
What am I doing wrong? Source code is below.
Perform affine transform (Python):
"""Apply an affine transform to an image."""
import cv
import sys
import numpy as np
if len(sys.argv) != 10:
print "usage: %s in.png out.png x1 y1 width height sx sy flip" % __file__
sys.exit(-1)
source = cv.LoadImage(sys.argv[1])
x1, y1, width, height, sx, sy, flip = map(float, sys.argv[3:])
X, Y = cv.GetSize(source)
Xn, Yn = int(sx*(X-1)), int(sy*(Y-1))
if flip:
arr = np.array([[-sx, 0, sx*(X-1)-x1], [0, sy, -y1]])
else:
arr = np.array([[sx, 0, -x1], [0, sy, -y1]])
print arr
warp = cv.fromarray(arr)
cv.ShowImage("source", source)
dest = cv.CreateImage((Xn, Yn), source.depth, source.nChannels)
cv.WarpAffine(source, dest, warp)
cv.SetImageROI(dest, (0, 0, int(width), int(height)))
cv.ShowImage("dest", dest)
cv.SaveImage(sys.argv[2], dest)
cv.WaitKey(0)
Estimate affine transform from two images (C++):
#include <stdio.h>
#include <iostream>
#include <opencv2/core/core.hpp>
#include <opencv2/features2d/features2d.hpp>
#include <opencv2/calib3d/calib3d.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/nonfree/nonfree.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <algorithm>

using namespace cv;

void readme();

bool cmpfun(DMatch a, DMatch b) { return a.distance < b.distance; }

/** @function main */
int main( int argc, char** argv )
{
    if( argc != 3 )
    {
        return -1;
    }
    Mat img_1 = imread( argv[1], CV_LOAD_IMAGE_GRAYSCALE );
    Mat img_2 = imread( argv[2], CV_LOAD_IMAGE_GRAYSCALE );
    if( !img_1.data || !img_2.data )
    {
        return -1;
    }

    //-- Step 1: Detect the keypoints using SURF Detector
    int minHessian = 400;
    SurfFeatureDetector detector( minHessian );
    std::vector<KeyPoint> keypoints_1, keypoints_2;
    detector.detect( img_1, keypoints_1 );
    detector.detect( img_2, keypoints_2 );

    //-- Step 2: Calculate descriptors (feature vectors)
    SurfDescriptorExtractor extractor;
    Mat descriptors_1, descriptors_2;
    extractor.compute( img_1, keypoints_1, descriptors_1 );
    extractor.compute( img_2, keypoints_2, descriptors_2 );

    //-- Step 3: Matching descriptor vectors with a brute force matcher
    BFMatcher matcher(NORM_L2, false);
    std::vector< DMatch > matches;
    matcher.match( descriptors_1, descriptors_2, matches );

    double max_dist = 0;
    double min_dist = 100;
    //-- Quick calculation of max and min distances between keypoints
    for( int i = 0; i < descriptors_1.rows; i++ )
    {
        double dist = matches[i].distance;
        if( dist < min_dist ) min_dist = dist;
        if( dist > max_dist ) max_dist = dist;
    }
    printf("-- Max dist : %f \n", max_dist );
    printf("-- Min dist : %f \n", min_dist );

    //-- Draw only "good" matches (i.e. whose distance is less than 2*min_dist )
    //-- PS.- radiusMatch can also be used here.
    sort(matches.begin(), matches.end(), cmpfun);
    std::vector< DMatch > good_matches;
    vector<Point2f> match1, match2;
    for (int i = 0; i < 3; ++i)
    {
        good_matches.push_back( matches[i] );
        Point2f pt1 = keypoints_1[matches[i].queryIdx].pt;
        Point2f pt2 = keypoints_2[matches[i].trainIdx].pt;
        match1.push_back(pt1);
        match2.push_back(pt2);
        printf("%3d pt1: (%.2f, %.2f) pt2: (%.2f, %.2f)\n", i, pt1.x, pt1.y, pt2.x, pt2.y);
    }

    //-- Draw matches
    Mat img_matches;
    drawMatches( img_1, keypoints_1, img_2, keypoints_2, good_matches, img_matches,
                 Scalar::all(-1), Scalar::all(-1), vector<char>(), DrawMatchesFlags::NOT_DRAW_SINGLE_POINTS );

    //-- Show detected matches
    imshow("Matches", img_matches );
    imwrite("matches.png", img_matches);
    waitKey(0);

    Mat fun = getAffineTransform(match1, match2);
    for (int i = 0; i < fun.rows; ++i)
    {
        for (int j = 0; j < fun.cols; j++)
        {
            printf("%.2f ", fun.at<float>(i,j));
        }
        printf("\n");
    }
    return 0;
}

/** @function readme */
void readme()
{
    std::cout << " Usage: ./SURF_descriptor <img1> <img2>" << std::endl;
}
The cv::Mat that getAffineTransform returns is made of doubles, not of floats. The matrix you get is probably fine; you just have to change the printf command in your loops to
printf("%.2f ", fun.at<double>(i,j));
or, even easier, replace the manual output with
std::cout << fun << std::endl;
It's shorter and you don't have to care about data types yourself.
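For reference, the print loop from the question with only the element type corrected:

for (int i = 0; i < fun.rows; ++i)
{
    for (int j = 0; j < fun.cols; j++)
    {
        printf("%.2f ", fun.at<double>(i,j)); // getAffineTransform returns CV_64F
    }
    printf("\n");
}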
