What I want to do is get the four vertex pixel points (2D coordinates) of a QR code
and pass them, together with the QR code's 3D world coordinates, as parameters to the function solvePnP.
But when I run it, solvePnP doesn't work; the error looks something like this:
Assertion failed (npoints >= 0 && npoints == std::max(ipoints.checkVector(2, CV_32F), ipoints.checkVector(2, CV_64F))) in cv::solvePnP
The documentation says solvePnP can take either a std::vector or a cv::Mat, so I tried switching between both data types, but it still fails.
My source code is below:
Point3d pt[4];
pt[0] = Point3d(0, 0, 0);
pt[1] = Point3d(0, 178, 0);
pt[2] = Point3d(178, 178, 0);
pt[3] = Point3d(178, 0, 0);
vector<Point3f> objectPoints;
for (int i = 0; i < 4; i++)
objectPoints.push_back(pt[i]); // 3d world coordinates
Point2d point[4];
After this, the four detected QR-code vertex coordinates end up in point[]. Next:
vector<Point2f> imagePoints;
for (int i = 0; i < 4; i++)
imagePoints.push_back(point[i]); // 2d image coordinates
//Mat objPts(4, 1, CV_64F, pt);
//Mat imgPts(4, 1, CV_64F, point);
// camera parameters
double Intrinsic[] = { fx, 0, cx, 0, fy, cy, 0, 0, 1 };
Mat Camera_Matrix(3, 3, CV_64FC1, Intrinsic);
double Distort[] = { k1, k2, p1, p2 };
Mat DistortCoeffs(4, 1, CV_64FC1, Distort);
// estimate camera pose
Mat rvec, tvec; // rotation & translation vectors
solvePnP(objectPoints, imagePoints, Camera_Matrix, DistortCoeffs, rvec, tvec);
please help!
In your code, the arrays pt and point are Point3d and Point2d (double precision), while objectPoints and imagePoints are vectors of Point3f and Point2f (single precision); keep the precision of the point containers consistent.
By the way, contrary to the documentation, it seems that solvePnP requires the object points and image points as a vector or as an N×3 / N×2 cv::Mat. I tried using a 3×N / 2×N cv::Mat as input, but the same assertion failure appears.
You may follow the official example to help debug; it is located in /samples/cpp/tutorial_code/calib3d/real_time_pose_estimation/src.
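For what it's worth, here is a minimal sketch of the same call with consistent point types. It uses OpenCV's Java bindings rather than the C++ API from the question, so treat it purely as an illustration of the type layout; fx, fy, cx, cy, the distortion coefficients and the image points are placeholder values you would replace with your own.
import org.opencv.calib3d.Calib3d;
import org.opencv.core.*;
public class QrPoseSketch {
    public static void main(String[] args) {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
        // 3D model points of the QR-code corners, single precision (CV_32F) throughout
        MatOfPoint3f objectPoints = new MatOfPoint3f(
                new Point3(0, 0, 0),
                new Point3(0, 178, 0),
                new Point3(178, 178, 0),
                new Point3(178, 0, 0));
        // 2D image points of the detected corners, in the same order as the model points
        MatOfPoint2f imagePoints = new MatOfPoint2f(
                new Point(320, 240),
                new Point(320, 400),
                new Point(480, 400),
                new Point(480, 240));
        // Camera intrinsics (fx, fy, cx, cy are your calibration values)
        double fx = 800, fy = 800, cx = 320, cy = 240;
        Mat cameraMatrix = new Mat(3, 3, CvType.CV_64FC1);
        cameraMatrix.put(0, 0, fx, 0, cx, 0, fy, cy, 0, 0, 1);
        // Distortion coefficients k1, k2, p1, p2
        MatOfDouble distCoeffs = new MatOfDouble(0, 0, 0, 0);
        Mat rvec = new Mat(), tvec = new Mat();
        Calib3d.solvePnP(objectPoints, imagePoints, cameraMatrix, distCoeffs, rvec, tvec);
        System.out.println("rvec:\n" + rvec.dump() + "\ntvec:\n" + tvec.dump());
    }
}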
Related
Some time ago, I coded a little fidgetable logo based on CSS transforms alone.
You can fiddle with it over https://document.paris/
The result feels nice, it feels natural to click/touch and drag to rotate the logo.
I remember banging my head against the wall until I found out that I could chain CSS transforms quite easily, simply by writing them one after another:
transform: matrix3d(currentMatrix) rotate3d(x, y, z, angle);
And most importantly, to get the currentMatrix I would simply do m = $('#logobackground').css('transform'); with jQuery; the browser would magically return the computed matrix instead of the raw CSS, which saved me from dealing with matrices or from infinitely stacking rotate3d() properties.
So the hardest part was calculating the rotate3D arguments (x, y, z, angle) from the mouse input. In theory I shouldn't have problems porting this part to Java, so I'll just skip over it.
Now
I'm trying to do the exact same thing with Processing, and there are two problems:
There is no rotate3D() in Processing.
There is no browser to apply/chain transformations and return me the current matrix state automatically.
Here's the plan/implementation I'm working on:
I need a "currentMatrix" to apply every frame to the scene
PMatrix3D currentMatrix = new PMatrix3D();
In the setup() I set it to the "identity matrix" which from what I understand is equivalent to "no transformation".
// set currentMatrix to identity Matrix
currentMatrix.set(1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1);
Every frame I would calculate a transformation matrix and apply it to the currentMatrix.
Then I would apply this matrix to the scene.
// Apply Matrix to the currentMatrix
void mouseRotate() {
float diag = sqrt(pow(width,2)+pow(height,2));
float x = deltaX()/ diag * 10; // deltaX = difference between the previous mouseX (pmouseX) and the current mouseX
float y = deltaY()/ diag * 10; // deltaY = same with Y axis
float angle = sqrt( pow(x, 2) + pow(y, 2) );
currentMatrix.apply( rotate3D(y,x,0,angle) );
}
// Apply Matrix to the scene
applyMatrix(currentMatrix);
PMatrix3D reference : https://processing.github.io/processing-javadocs/core/processing/core/PMatrix3D.html
ApplyMatrix() reference : https://processing.org/reference/applyMatrix_.html
All I need to do then is to implement the rotate3D css transform as a function which returns a transformation matrix.
Based on what I found on this page https://developer.mozilla.org/en-US/docs/Web/CSS/transform-function/rotate3d()
I implemented this first function :
PMatrix3D rotate3D(float x, float y, float z, float a) {
PMatrix3D rotationMatrix = new PMatrix3D();
rotationMatrix.set(
1+(1-cos(a))*(pow(x,2)-1), z*sin(a)+x*y*(1-cos(a)), -y*sin(a)+x*z*(1-cos(a)), 0,
-z*sin(a)+x*y*(1-cos(a)), 1+(1-cos(a))*(pow(y,2)-1), x*sin(a)+y*z*(1-cos(a)), 0,
y*sin(a)+x*z*(1-cos(a)), -x*sin(a)+y*z*(1-cos(a)), 1+(1-cos(a))*(pow(z,2)-1), 0,
0,0,0,1
);
return rotationMatrix;
}
and based on what I found on this page https://drafts.csswg.org/css-transforms-2/#Rotate3dDefined
I implemented this other function :
PMatrix3D rotate3Dbis(float getX, float getY, float getZ, float getA) {
float sc = sin(getA/2)*cos(getA/2);
float sq = pow(sin(getA/2),2);
float normalizer = sqrt( pow(getX,2) + pow(getY,2) + pow(getZ,2) );
float x = getX/normalizer;
float y = getY/normalizer;
float z = getZ/normalizer;
PMatrix3D rotationMatrix = new PMatrix3D();
rotationMatrix.set(
1-2*(pow(y,2)+pow(z,2))*sq, 2*(x*y*sq-z*sc), 2*(x*z*sq+y*sc), 0,
2*(x*y*sq+z*sc), 1-2*(pow(x,2)+pow(z,2))*sq, 2*(y*z*sq-x*sc), 0,
2*(x*z*sq-y*sc), 2*(y*z*sq+x*sc), 1-2*(pow(x,2)+pow(y,2)*sq), 0,
0, 0, 0, 1
);
return rotationMatrix;
}
When testing, they don't produce exactly the same result for the same inputs (although the differences are somewhat "symmetric", which makes me think they are at least equivalent in some way?). Also, rotate3Dbis() tends to produce NaN values, especially when I'm not moving the mouse (x & y = 0).
But most importantly, in the end it doesn't work. Instead of rotating, the drawing just zooms out progressively when I'm using rotate3D(), and rotate3Dbis() doesn't render correctly because of the NaNs.
The overall question:
I'm trying to get guidance from people who understand transformation matrices, and to narrow down where the issue is. Are my Processing/Java implementations of rotate3D() flawed? Or does the issue come from somewhere else? And are my rotate3D() and rotate3Dbis() functions equivalent?
You might get away with simply rotating around the X and Y axes, as you already mentioned, using the previous and current mouse coordinates:
PVector cameraRotation = new PVector(0, 0);
void setup(){
size(900, 900, P3D);
rectMode(CENTER);
strokeWeight(9);
strokeJoin(MITER);
}
void draw(){
//update "camera" rotation
if (mousePressed){
cameraRotation.x += -float(mouseY-pmouseY);
cameraRotation.y += float(mouseX-pmouseX);
}
background(255);
translate(width * 0.5, height * 0.5, 0);
rotateX(radians(cameraRotation.x));
rotateY(radians(cameraRotation.y));
rect(0, 0, 300, 450);
}
The Document Paris example you've shared also uses easing. You can have a look at this minimal easing Processing example.
Here's a version of the above with easing applied:
PVector cameraRotation = new PVector();
PVector cameraTargetRotation = new PVector();
float easing = 0.01;
void setup(){
size(900, 900, P3D);
rectMode(CENTER);
strokeWeight(9);
strokeJoin(MITER);
}
void draw(){
//update "camera" rotation
if (mousePressed){
cameraTargetRotation.x += -float(mouseY-pmouseY);
cameraTargetRotation.y += float(mouseX-pmouseX);
}
background(255);
translate(width * 0.5, height * 0.5, 0);
// ease rotation
rotateX(radians(cameraRotation.x -= (cameraRotation.x - cameraTargetRotation.x) * easing));
rotateY(radians(cameraRotation.y -= (cameraRotation.y - cameraTargetRotation.y) * easing));
fill(255);
rect(0, 0, 300, 450);
fill(0);
translate(0, 0, 3);
rect(0, 0, 300, 450);
}
Additionally there's a library called PeasyCam which can make this much simpler.
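If you go the PeasyCam route, a rough equivalent of the sketch above might look like this (assuming the library is installed through the Contribution Manager; the camera distance of 600 is an arbitrary choice):
import peasy.PeasyCam;
PeasyCam cam;
void setup(){
  size(900, 900, P3D);
  rectMode(CENTER);
  strokeWeight(9);
  strokeJoin(MITER);
  // PeasyCam takes over mouse-drag rotation (with easing) around the look-at point
  cam = new PeasyCam(this, 600);
}
void draw(){
  background(255);
  rect(0, 0, 300, 450);
}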
If you do want to implement your own version using PMatrix3D, here are a couple of tips that could save you time:
When you instantiate PMatrix3D() it is already the identity matrix. If you have transformations applied and want to go back, call reset() to return it to the identity.
If you want to rotate a PMatrix3D() around an axis, the rotate(float angleInRadians, float axisX, float axisY, float axisZ) override should help.
Additionally, you could get away without PMatrix3D, since resetMatrix() will reset the global transformation matrix and you can call rotate(float angleInRadians, float axisX, float axisY, float axisZ) directly.
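Here is a minimal sketch of those tips put together (assumptions: P3D renderer, an accumulated PMatrix3D named currentMatrix, rotation driven by horizontal mouse drag, any key press resetting the matrix):
PMatrix3D currentMatrix = new PMatrix3D(); // a fresh PMatrix3D is already the identity
void setup(){
  size(600, 600, P3D);
}
void draw(){
  background(255);
  if (mousePressed){
    // accumulate a small rotation (in radians) around the Y axis, driven by horizontal drag
    currentMatrix.rotate(radians(mouseX - pmouseX), 0, 1, 0);
  }
  if (keyPressed){
    currentMatrix.reset(); // back to the identity, i.e. no transformation
  }
  translate(width * 0.5, height * 0.5, 0);
  applyMatrix(currentMatrix);
  box(200);
}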
Part of the answer is a fix added to the first rotate3D function.
I needed to normalize the x,y,z values to avoid the weird scaling.
I'm posting the current state of the code (I'm skipping a few parts for the sake of simplicity):
// Mouse movement since last frame on X axis
float deltaX() {
return (float)(mouseX-pmouseX);
}
// Mouse movement since last frame on Y axis
float deltaY() {
return (float)(mouseY-pmouseY);
}
// Convert user input into angle and amount to rotate to
void mouseRotate() {
double diag = Math.sqrt(Math.pow(width,2)+Math.pow(height,2));
double x = deltaX()/ diag * 50;
double y = -deltaY()/ diag * 50;
double angle = Math.sqrt( x*x + y*y );
currentMatrix.apply( rotate3D((float)y,(float)x,0,(float)angle) );
}
// Convert those values into a rotation matrix
PMatrix3D rotate3D(float getX, float getY, float getZ, float getA) {
float normalizer = sqrt( getX*getX + getY*getY + getZ*getZ );
float x = 0;
float y = 0;
float z = 0;
if (normalizer != 0) {
x = getX/normalizer;
y = getY/normalizer;
z = getZ/normalizer;
}
float x2 = pow(x,2);
float y2 = pow(y,2);
float z2 = 0;
float sina = sin(getA);
float f1cosa = 1-cos(getA);
PMatrix3D rotationMatrix = new PMatrix3D(
1+f1cosa*(x2-1), z*sina+x*y*f1cosa, -y*sina+x*z*f1cosa, 0,
-z*sina+x*y*f1cosa, 1+f1cosa*(y2-1), x*sina+y*z*f1cosa, 0,
y*sina+x*z*f1cosa, -x*sina+y*z*f1cosa, 1+f1cosa*(z2-1), 0,
0, 0, 0, 1
);
return rotationMatrix;
}
// Draw
void draw() {
mouseRotate();
applyMatrix(currentMatrix);
object.render();
}
I thought this approach would let me "stack" cumulative rotations relative to the screen rather than relative to the object, but the rotation always seems to be applied relative to the object drawn.
I am not using a camera because I basically only want to rotate the object on itself. I'm a bit lost at the moment about what I should rotate, and when, so that newly applied rotations are relative to the user while previously applied rotations are preserved.
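One guess, based on how PMatrix3D composes matrices: apply() multiplies the new matrix on the right, so each incremental rotation is expressed in the already-rotated (object) frame, while preApply() multiplies on the left, which keeps the incremental rotation in screen space. A sketch of that variant of mouseRotate(), reusing the deltaX()/deltaY() and rotate3D() helpers above:
// Same as mouseRotate() above, but pre-multiplying so the new rotation is screen-relative
void mouseRotate() {
  float diag = sqrt(width * width + height * height);
  float x = deltaX() / diag * 50;
  float y = -deltaY() / diag * 50;
  float angle = sqrt(x * x + y * y);
  // preApply(): currentMatrix = rotation * currentMatrix (instead of currentMatrix * rotation)
  currentMatrix.preApply(rotate3D(y, x, 0, angle));
}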
I'm trying to understand matrix rotation using WebGL.
I have this mat4() matrix and I have to apply these transformations:
m = translate(torsoHeight+1*headHeight, 5, 0.0);
m = mult(m, rotate(theta[head1Id], 1, 0, 0))
m = mult(m, rotate(theta[head2Id], 0, 1, 0));
m = mult(m, translate(0.0, -0.5*headHeight, 0.0));
figure[headId] = createNode( m, head, leftUpperArmId, null);
break;
I did not understand exactly how the mult function works. The first parameter is my matrix.
The theta[] array is built this way:
var theta = [0, 0, 0, 0, 0, 0, 180, 0, 180, 0, 0];
and
var headId = 1;
var head1Id = 1;
var head2Id = 10;
Am I right in thinking that the second parameter is another matrix built with the rotate() function? In that case, how does the rotate function work?
rotate and translate are functions that create matrices.
rotate looks like its arguments are (angle, vectorX, vectorY, vectorZ); it creates a matrix that rotates points around the given vector.
mult is the standard mathematical multiplication for 4x4 matrices.
You should probably dig into linear algebra tutorials such as https://open.gl/transformations
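For illustration only, here is a plain-Java sketch of what those two helpers do conceptually (this is not the actual WebGL utility library from your code, whose conventions, such as column-major storage or the angle unit, may differ): rotate(angle, x, y, z) builds a 4×4 axis-angle rotation matrix, and mult(a, b) is the ordinary matrix product a·b, so mult(m, rotate(...)) appends the rotation to whatever transform m already holds.
public class MatrixDemo {
    // Axis-angle rotation matrix (row-major 4x4); angle in degrees, (x, y, z) is the rotation axis
    static double[][] rotate(double angleDeg, double x, double y, double z) {
        double len = Math.sqrt(x * x + y * y + z * z);
        x /= len; y /= len; z /= len;                        // normalize the axis
        double a = Math.toRadians(angleDeg);
        double c = Math.cos(a), s = Math.sin(a), t = 1 - c;
        return new double[][] {
            { t * x * x + c,     t * x * y - s * z, t * x * z + s * y, 0 },
            { t * x * y + s * z, t * y * y + c,     t * y * z - s * x, 0 },
            { t * x * z - s * y, t * y * z + s * x, t * z * z + c,     0 },
            { 0,                 0,                 0,                 1 }
        };
    }
    // Standard 4x4 matrix multiplication: result = a * b
    static double[][] mult(double[][] a, double[][] b) {
        double[][] r = new double[4][4];
        for (int i = 0; i < 4; i++)
            for (int j = 0; j < 4; j++)
                for (int k = 0; k < 4; k++)
                    r[i][j] += a[i][k] * b[k][j];
        return r;
    }
    public static void main(String[] args) {
        double[][] m = rotate(180, 1, 0, 0);       // e.g. theta[head1Id] = 180 degrees around X
        m = mult(m, rotate(90, 0, 1, 0));          // append a 90-degree rotation around Y
        System.out.println(java.util.Arrays.deepToString(m));
    }
}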
I have a small function to create a new instance of a WorldObject.
I want to use osg::ref_ptr<osg::PositionAttitudeTransform> for translation and rotation but there is a problem I can't figure out.
I use setPosition() with a Vec3, which works very well. But the Quat set up with makeRotate() just does nothing.
Here is the code:
osg::ref_ptr <osg::PositionAttitudeTransform> getWorldObjectClone(const char* name, osg::Vec3 position = osg::Vec3(0, 0, 0), osg::Vec3 rotation = osg::Vec3(0, 0, 0))
{
osg::ref_ptr <osg::PositionAttitudeTransform> tmp = new osg::PositionAttitudeTransform;
osg::Quat q(0, osg::Vec3(0, 0, 0));
tmp = dynamic_cast<osg::PositionAttitudeTransform*>(WorldObjects[name]->clone(osg::CopyOp::DEEP_COPY_ALL));
tmp->setPosition(position);
q.makeRotate(rotation.x(), 1, 0, 0);
q.makeRotate(rotation.y(), 0, 1, 0);
q.makeRotate(rotation.z(), 0, 0, 1);
tmp->setAttitude(q);
return tmp;
}
I tried rotation = {90,0,0} (degrees) and rotation = {1,0,0} (radians), but neither has any effect. Is there a mistake in how the code is using the Quat?
The rotation method you are using works with radians.
If you want to rotate 90 degrees around the X axis, you need to call:
q.makeRotate(osg::PI_2, 1, 0, 0 );
// or the equivalent
q.makeRotate(osg::PI_2, osg::X_AXIS);
Keep in mind that every call to makeRotate will reset the full quaternion to the given rotation. If you're trying to concatenate several rotations, you have to multiply the corresponding quaternions.
For instance:
osg::Quat xRot, yRot;
// rotate 90 degrees around x
xRot.makeRotate(osg::PI_2, osg::X_AXIS);
// rotate 90 degrees around y
yRot.makeRotate(osg::PI_2, osg::Y_AXIS);
// concatenate the 2 into a resulting quat
osg::Quat fullRot = xRot * yRot;
I have tried the code below. 540 is the left-most x value of the box and 3 is its top y value; 262 is the width and 23 is the height of the region for which I want to calculate the ratio of white to black pixels. What I really want to do is compute the white/black pixel ratio in a specific region. I have calculated the coordinates for each cell (the regions I'm specifying) and tried this code, but the count comes out wrong.
Can you please give me an idea of what is going wrong here?
I am really stuck here with my final year project.
CvSize cvSize = cvSize(img.width(), img.height());
IplImage image = cvCreateImage(cvSize, IPL_DEPTH_8U, 1);
IplImage image2 = cvCreateImage(cvSize, IPL_DEPTH_8U, 3);
cvCvtColor(image2, image, CV_RGB2GRAY);
cvSetImageROI(image2, cvRect(540,3,262,23));
//IplImage image2 = cvCreateImage(cvSize, IPL_DEPTH_8U, 3);
//
//cvCvtColor(arg0, arg1, arg2)
// cvCvtColor(image2, image, CV_RGB2GRAY);
//cvThreshold(image, image, 128, 255, CV_THRESH_BINARY);
CvLineIterator iterator = new CvLineIterator();
double sum = 0, green_sum = 0, red_sum = 0;
CvPoint p2 = new CvPoint(802,3);
CvPoint p1 = new CvPoint(540,26);
int lineCount = cvInitLineIterator(image2, p1, p2, iterator, 8, 0 );
for (int i = 0; i < lineCount; i++) {
sum += iterator.ptr().get() & 0xFF;
}
System.out.println("sum................"+sum);
CV_NEXT_LINE_POINT(iterator);
}
}
it gave the result as sum................0.0
I am really stuck with this. Can you please suggest a solution for this issue?
Move the CV_NEXT_LINE_POINT(iterator); line inside the for loop. Then it should work.
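Applied to the loop from the question (reusing image2, p1, p2 and iterator from above), the corrected iteration would look roughly like this:
int lineCount = cvInitLineIterator(image2, p1, p2, iterator, 8, 0);
double sum = 0;
for (int i = 0; i < lineCount; i++) {
    sum += iterator.ptr().get() & 0xFF;  // value of the current pixel on the line
    CV_NEXT_LINE_POINT(iterator);        // advance the iterator to the next pixel
}
System.out.println("sum................" + sum);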
I have a really simple pixel shader:
float4 PixelShaderFunction(float2 uv : TEXCOORD0) : COLOR0
{
return float4(0, 1, 0, 1);
}
technique Technique1
{
pass Pass1
{
PixelShader = compile ps_3_0 PixelShaderFunction();
}
}
I have a texture:
Vector4[] textureData = new Vector4[width * height];
for (int y = 0; y < height; y++)
{
for (int x = 0; x < width; x++)
{
textureData[y * width + x] = new Vector4(1, 0, 0, 1);
}
}
myTexture = new Texture2D(GraphicsDevice, width, height, false, SurfaceFormat.Vector4);
myTexture.SetData(textureData);
and I draw it with this code:
spriteBatch.Begin(SpriteSortMode.Texture,
BlendState.Additive,
SamplerState.PointWrap,
DepthStencilState.DepthRead,
RasterizerState.CullNone);
myEffect.CurrentTechnique.Passes[0].Apply();
spriteBatch.Draw(myTexture, new Rectangle(0, 0, width, height), Color.White);
spriteBatch.End();
I figured that by calling .Apply() on the effect pass, the subsequent spriteBatch.Draw() call would send myTexture through my pixel shader. Since the pixel shader function always returns float4(0, 1, 0, 1), I expected the result to be a green square, but instead it renders a red one, as if the pixel shader is not touching it.
What am I missing?
You're never actually calling Begin() on your shader, so it will still be using the default shader.
Also, there is a cleaner way of doing it now. You can pass your effect as a parameter to the SpriteBatch begin call, as detailed here.
Looks like I just needed to change the SpriteSortMode to Immediate and change the pixel shader version from 3 to 2.