SceneKit - Plotting the trajectory of an object - animation

OK, I was not very sure about the earlier conversion formula, as someone pointed out (I had just read it somewhere and tried it out). Now I am trying an approach where I create a tiny box at the edge of the ship and keep reading its position as rotation is applied to the ship, then use those positions to plot the geometry. Below is the relevant code:
scene = [SCNScene sceneNamed:@"ship.scn"];
_viewpoint1.scene = scene;
box = [SCNNode node];
box.geometry = [SCNBox boxWithWidth:.1 height:.1 length:.1 chamferRadius:.1];
box.physicsBody = [SCNPhysicsBody staticBody];
// position the box to the end of the ship
box.position = SCNVector3Make(box.position.x, box.position.y, box.position.z +5.6);
[_viewpoint1.scene.rootNode addChildNode:box];
As the app receives quaternions, the ship rotates, and so does the box at its tip; I capture the positions of that box and store them in a positions array.
_viewpoint1.scene.rootNode.orientation = SCNVector4Make(f_q0, f_q1, f_q2, f_q3);
SCNVector3 pt = [box convertPosition:SCNVector3Make(box.position.x, box.position.y, box.position.z) toNode:nil];
[positions addObject:[NSValue valueWithSCNVector3:SCNVector3Make(pt.x, pt.y, pt.z)]];
Later, I use this positions array to create the geometry as shown below:
SCNVector3 positions2[pointCount];
for (int j = 0; j < pointCount; j++)
{
SCNVector3 value = [[positions objectAtIndex:j] SCNVector3Value];
SCNVector3 value1 = [_viewpoint2.scene.rootNode convertPosition:value fromNode:nil];
positions2[j] = value1;
}
SCNGeometrySource *vertexSource1 =
[SCNGeometrySource geometrySourceWithVertices:positions2 count:pointCount];
NSData *indexData1 = [NSData dataWithBytes:indices2
length:sizeof(indices2)];
SCNGeometryElement *element1 =
[SCNGeometryElement geometryElementWithData:indexData1
primitiveType:SCNGeometryPrimitiveTypeLine
primitiveCount:pointCount
bytesPerIndex:sizeof(int)];
SCNGeometry *geometry1 = [SCNGeometry geometryWithSources:@[vertexSource1]
elements:@[element1]];
SCNNode* lineNode1 = [SCNNode nodeWithGeometry:geometry1];
[_viewpoint2.scene.rootNode addChildNode:lineNode1];
What I observe is that the arc drawn is bigger than the ship's rotation: if the ship rotates by 30 degrees, the arc drawn spans about 60 degrees. It should match the ship's rotation. What am I doing incorrectly here?
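A likely culprit is the convertPosition: call: box.position is already expressed in the parent's coordinate space, so converting it from the box's own local space applies the offset a second time, and the traced point lands at twice the true radius. Converting SCNVector3Zero (the box's own origin) gives the real tip position. A small Python sketch of the same transform math (plain rotation matrices standing in for the node hierarchy, assuming the box is a direct child of the rotated root) shows the doubling:

```python
import math

def rot_y(angle):
    """3x3 rotation matrix for a rotation about the Y axis."""
    c, s = math.cos(angle), math.sin(angle)
    return [[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]]

def mat_vec(m, v):
    return tuple(sum(m[i][j] * v[j] for j in range(3)) for i in range(3))

p = (0.0, 0.0, 5.6)          # box.position in the ship's (parent's) space
R = rot_y(math.radians(30))  # rotation applied to the root node

def box_to_world(q):
    # The box's local-to-world transform: translate by p, then rotate by R.
    return mat_vec(R, tuple(p[i] + q[i] for i in range(3)))

traced_correct = box_to_world((0.0, 0.0, 0.0))  # like convertPosition:SCNVector3Zero
traced_wrong = box_to_world(p)                   # like convertPosition:box.position
# traced_wrong ends up at exactly twice traced_correct
```

With the offset applied twice, the traced curve sits on a larger circle than the ship's tip, which would make the plotted arc look bigger than the actual rotation.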

Related

AFRAME/THREE: take camera rotation into account when positioning an object in front of the camera

I have a component that moves an object from its position in the scene to a position in front of the camera.
This works fine if the camera has not been rotated, but I can't get it to account for the camera rotation when it has been rotated.
//Copy the initial data
this._initialPosition = this._threeElement.position.clone()
this._initialQuaternion = this._threeElement.quaternion.clone()
//Convert fov + reduce it
let fovInRad = AFRAME.THREE.Math.degToRad(this._cameraEntity.fov)/2
let ratio=window.innerWidth/window.innerHeight //Assuming the FOV is vertical
let pLocal,cPos
let sizeY = this._size.y
let sizeX = this._size.x
let sizeZ = this._size.z
sizeX*=this._threeElement.scale.x
sizeY*=this._threeElement.scale.y
sizeZ*=this._threeElement.scale.z //note: the original had scale.x here, likely a typo
const uSpace= 0.8
sizeY/=2;sizeX/=2
sizeY*=uSpace;sizeX*=uSpace
let tanFov = Math.tan(fovInRad)
let distY = sizeY/tanFov
let distX = ((sizeX/(ratio*tanFov)) < distY) ? distY : sizeX/(ratio*tanFov)
pLocal = new AFRAME.THREE.Vector3(0, 0, -(distX + sizeZ))
cPos = this._cameraEntity.position.clone()
cPos.y += 1.6
this._targetPosition = pLocal.applyMatrix4(this._cameraEntity.matrixWorld.clone())
this._threeElement.parent.worldToLocal(this._targetPosition)
let targetLook = cPos.applyMatrix4(this._cameraEntity.matrixWorld.clone())
this._threeElement.parent.worldToLocal(targetLook)
this._threeElement.position.copy(this._targetPosition)
this._threeElement.lookAt(targetLook)
this._targetQuaternion = this._threeElement.quaternion.clone()
The issue seems to be where I apply the camera's matrix.
Any idea how to find this target position relative to where the camera is facing?
new THREE.Vector3(0, 0, -camera.near).applyQuaternion(camera.quaternion).add(camera.position)
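The one-liner above rotates a camera-local forward vector by the camera's quaternion and adds the camera position; that is all "in front of the camera" means. A Python sketch of the same math (the quaternion and distance values here are made up for illustration), mirroring what THREE's Vector3.applyQuaternion does:

```python
import math

def quat_rotate(q, v):
    """Rotate vector v by unit quaternion q = (x, y, z, w)."""
    x, y, z, w = q
    # t = 2 * cross(q.xyz, v)
    tx = 2 * (y * v[2] - z * v[1])
    ty = 2 * (z * v[0] - x * v[2])
    tz = 2 * (x * v[1] - y * v[0])
    # v' = v + w*t + cross(q.xyz, t)
    return (
        v[0] + w * tx + (y * tz - z * ty),
        v[1] + w * ty + (z * tx - x * tz),
        v[2] + w * tz + (x * ty - y * tx),
    )

def place_in_front(cam_pos, cam_quat, distance):
    """Point `distance` units along the camera's viewing direction (-Z)."""
    offset = quat_rotate(cam_quat, (0.0, 0.0, -distance))
    return tuple(p + o for p, o in zip(cam_pos, offset))

# Camera at the origin, yawed 90 degrees left (about +Y): forward becomes -X.
half = math.radians(90.0) / 2.0
q = (0.0, math.sin(half), 0.0, math.cos(half))
target = place_in_front((0.0, 0.0, 0.0), q, 2.0)
```

Because the offset is rotated before being added, the object lands in front of wherever the camera is actually looking, not always along world -Z.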

DirectX Camera Jitter

Source Code.
I'm making a small DirectX Demo Scene but my camera seems to "snap" to odd positions when I attempt to rotate it. It only happens when rotating and I can't seem to find out what is causing it.
// Get the cursor pos and calculate change in movement
POINT cursorPos;
GetCursorPos(&cursorPos);
LONG deltaX = oldCursorPos.x - cursorPos.x;
LONG deltaY = oldCursorPos.y - cursorPos.y;
// Hold right click to rotate
if (GetAsyncKeyState(VK_RBUTTON))
{
XMMATRIX xRotation = XMMatrixRotationY(((float)-deltaX * (float)timer.Delta()));
XMMATRIX yRotation = XMMatrixRotationX(((float)-deltaY * (float)timer.Delta()));
XMMATRIX view = XMLoadFloat4x4(&cameraMatrix);
XMFLOAT4 viewVector = XMFLOAT4(cameraMatrix.m[3][0], cameraMatrix.m[3][1], cameraMatrix.m[3][2], 1.0f);
for (size_t i = 0; i < 3; i++) { cameraMatrix.m[3][i] = 0.0f; }
view = view * xRotation;
view = yRotation * view;
XMStoreFloat4x4(&cameraMatrix, view);
cameraMatrix.m[3][0] = viewVector.x;
cameraMatrix.m[3][1] = viewVector.y;
cameraMatrix.m[3][2] = viewVector.z;
}
oldCursorPos = cursorPos;
Above is the code that performs the rotations to the camera matrix, below is the code I use to set the view matrix equal to the inverse of the camera matrix. Both of these operations are done every frame.
XMMATRIX camera = XMLoadFloat4x4(&cameraMatrix);
XMMATRIX view = XMMatrixInverse(NULL, camera);
XMStoreFloat4x4(&sceneMatrix.viewMatrix, view);
Neither of these snippets seems to be the problem, though, as I have triple-checked my notes and this is exactly how my instructor expects it to be done. The bug happens in both debug and release builds.
I put the source code in the link above if an attractive person such as yourself dares to look at the rest of the code. Beware: it is a small demo application, so try not to cringe at the hard-coded objects and such.
I'm not certain it's causing your problem, as a simple demo might have a consistent frame-rate, but you shouldn't be scaling mouse movement by a time delta.
These lines:
XMMATRIX xRotation = XMMatrixRotationY(((float)-deltaX * (float)timer.Delta()));
XMMATRIX yRotation = XMMatrixRotationX(((float)-deltaY * (float)timer.Delta()));
Should be
float fRotationSpeed = 0.01f; // Tweak this.
XMMATRIX xRotation = XMMatrixRotationY(((float)-deltaX * fRotationSpeed));
XMMATRIX yRotation = XMMatrixRotationX(((float)-deltaY * fRotationSpeed));
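The point of the fix is that mouse deltas are already per-frame quantities, so multiplying them by the frame time makes total rotation depend on the frame rate. A quick numeric sketch (frame rates and the sensitivity constant are made-up illustration values):

```python
SENSITIVITY = 0.01  # radians per pixel -- hypothetical tuning constant

def total_rotation_time_scaled(deltas, dt):
    # Buggy: pixel deltas are already "per frame", so multiplying by dt
    # makes the result depend on how many frames the move spanned.
    return sum(d * dt for d in deltas)

def total_rotation_fixed(deltas):
    # Correct: a constant sensitivity gives the same total regardless of fps.
    return sum(d * SENSITIVITY for d in deltas)

# The same 100-pixel mouse move, delivered over different frame counts:
# 10 frames of 10 px at 60 fps, vs 5 frames of 20 px at 30 fps.
fast = total_rotation_time_scaled([10] * 10, 1 / 60)
slow = total_rotation_time_scaled([20] * 5, 1 / 30)
# fast and slow differ (roughly 1.67 vs 3.33), even though the hand motion was identical

same_a = total_rotation_fixed([10] * 10)
same_b = total_rotation_fixed([20] * 5)
# same_a and same_b are equal
```

An uneven frame rate would turn that dependence into the visible "snapping" described in the question.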

Emgu CV draw rotated rectangle

I've been looking for a few days for a solution to draw a rectangle on an image frame. I'm using the CvInvoke.cvRectangle method to draw the rectangle because I need an antialiased rect.
The problem is when I need to rotate a given shape by a given angle; I can't find any good solution.
I have tried drawing the rectangle on a separate frame, rotating the whole frame, and applying this new image on top of my base frame, but with that approach the antialiasing is lost. It's not working.
I'm working on a simple application that should allow drawing a few kinds of shapes, resizing them, and rotating them by a given angle.
Any idea how to achieve this?
The best way I found to draw a minimum enclosing rectangle on a contour is the Polylines() function, using the vertices returned by the MinAreaRect() function. There are surely other ways to do it as well. Here is the code walkthrough:
// Find contours
var contours = new Emgu.CV.Util.VectorOfVectorOfPoint();
Mat hierarchy = new Mat();
CvInvoke.FindContours(image, contours, hierarchy, RetrType.Tree, ChainApproxMethod.ChainApproxSimple);
// According to your metric, get an index of the contour you want to find the min enclosing rectangle for
int index = 2; // Say, 2nd index works for you.
var rectangle = CvInvoke.MinAreaRect(contours[index]);
Point[] vertices = Array.ConvertAll(rectangle.GetVertices(), Point.Round);
CvInvoke.Polylines(image, vertices, true, new MCvScalar(0, 0, 255), 5);
The result can be visualized in the image below, in red is the minimum enclosing rectangle.
I use C# and Emgu.CV (4.1), and I think this code will not be difficult to port to any platform.
Add this function to your helper class:
public static Mat DrawRect(Mat input, RotatedRect rect, MCvScalar color = default(MCvScalar),
int thickness = 1, LineType lineType = LineType.EightConnected, int shift = 0)
{
var v = rect.GetVertices();
var firstPoint = v[0];
var prevPoint = firstPoint;
for (var i = 1; i < v.Length; i++)
{
CvInvoke.Line(input, Point.Round(prevPoint), Point.Round(v[i]), color, thickness, lineType, shift);
prevPoint = v[i];
}
// Close the polygon back to the first vertex
CvInvoke.Line(input, Point.Round(prevPoint), Point.Round(firstPoint), color, thickness, lineType, shift);
return input;
}
This draws a rotated rectangle from its vertex points. The points are rounded with Point.Round because RotatedRect stores its points in float coordinates, while CvInvoke.Line takes integer points.
Use:
var mat = Mat.Zeros(200, 200, DepthType.Cv8U, 3);
mat.GetValueRange();
var rRect = new RotatedRect(new PointF(100, 100), new SizeF(100, 50), 30);
DrawRect(mat, rRect,new MCvScalar(255,0,0));
var brect = CvInvoke.BoundingRectangle(new VectorOfPointF(rRect.GetVertices()));
CvInvoke.Rectangle(mat, brect, new MCvScalar(0,255,0), 1, LineType.EightConnected, 0);
Result:
You should read the OpenCV documentation.
There is a RotatedRect class that you can use for your task. You can specify the angle by which the rectangle will be rotated.
Here is a sample code (taken from the docs) for drawing a rotated rectangle:
Mat image(200, 200, CV_8UC3, Scalar(0));
RotatedRect rRect = RotatedRect(Point2f(100,100), Size2f(100,50), 30);
Point2f vertices[4];
rRect.points(vertices);
for (int i = 0; i < 4; i++)
line(image, vertices[i], vertices[(i+1)%4], Scalar(0,255,0));
Rect brect = rRect.boundingRect();
rectangle(image, brect, Scalar(255,0,0));
imshow("rectangles", image);
waitKey(0);
Here is the result:
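Under the hood, RotatedRect.points()/GetVertices() is just rotating the four half-extent corner offsets about the center. A small Python sketch of that geometry (the vertex ordering here may differ from OpenCV's):

```python
import math

def rotated_rect_vertices(cx, cy, w, h, angle_deg):
    """Four corners of a w-by-h rectangle centered at (cx, cy), rotated by angle_deg."""
    a = math.radians(angle_deg)
    c, s = math.cos(a), math.sin(a)
    # Corner offsets from the center, before rotation
    corners = [(-w / 2, -h / 2), (w / 2, -h / 2), (w / 2, h / 2), (-w / 2, h / 2)]
    # Standard 2D rotation of each offset, translated back to the center
    return [(cx + x * c - y * s, cy + x * s + y * c) for x, y in corners]

# Same parameters as the samples above: center (100,100), size 100x50, angle 30
verts = rotated_rect_vertices(100, 100, 100, 50, 30)
```

Drawing lines between consecutive vertices (and closing back to the first) reproduces what the Polylines/line-loop snippets above do.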

Scene Kit - Dragging an Object

I’m trying to drag a chess piece with the mouse/trackpad using Scene Kit. All objects (board and pieces) are children of the root node, loaded from a Collada file.
I found a helpful description of the process elsewhere on Stack Overflow. Using that description I wrote the initial version of the code below. My problem is the disparity between the click coordinates and the piece node position — their coordinates are different orders of magnitude. I remain unclear on how to match them up — put them in the “same universe”. I’ve tried a number of suggestions from the Apple forums along with flights of fancy of my own.
Here’s my current attempt, mostly reverted back to the original version based on the link above, along with logged coordinate values along the way. The result is that dragging a chess piece causes it to abruptly jump off screen:
- (NSPoint)
viewPointForEvent: (NSEvent *) event_
{
NSPoint windowPoint = [event_ locationInWindow];
NSPoint viewPoint = [self.view convertPoint: windowPoint
fromView: nil];
return viewPoint;
}
- (SCNHitTestResult *)
hitTestResultForEvent: (NSEvent *) event_
{
NSPoint viewPoint = [self viewPointForEvent: event_];
CGPoint cgPoint = CGPointMake (viewPoint.x, viewPoint.y);
NSArray * points = [(SCNView *) self.view hitTest: cgPoint
options: @{}];
return points.firstObject;
}
- (void)
mouseDown: (NSEvent *) theEvent
{
SCNHitTestResult * result = [self hitTestResultForEvent: theEvent];
SCNVector3 clickWorldCoordinates = result.worldCoordinates;
// log output: clickWorldCoordinates x 208.124578, y -12827.223365, z 3163.659073
SCNVector3 screenCoordinates = [(SCNView *) self.view projectPoint: clickWorldCoordinates];
// log output: screenCoordinates x 245.128906, y 149.335938, z 0.985565
// save the z coordinate for use in mouseDragged
mouseDownClickOnObjectZCoordinate = screenCoordinates.z;
selectedPiece = result.node; // save selected piece for use in mouseDragged
SCNVector3 piecePosition = selectedPiece.position;
// log output: piecePosition x -18.200000, y 6.483060, z 2.350000
offsetOfMouseClickFromPiece.x = clickWorldCoordinates.x - piecePosition.x;
offsetOfMouseClickFromPiece.y = clickWorldCoordinates.y - piecePosition.y;
offsetOfMouseClickFromPiece.z = clickWorldCoordinates.z - piecePosition.z;
// log output: offsetOfMouseClickFromPiece x 226.324578, y -12833.706425, z 3161.309073
}
- (void)
mouseDragged: (NSEvent *) theEvent;
{
NSPoint viewClickPoint = [self viewPointForEvent: theEvent];
SCNVector3 clickCoordinates;
clickCoordinates.x = viewClickPoint.x;
clickCoordinates.y = viewClickPoint.y;
clickCoordinates.z = mouseDownClickOnObjectZCoordinate;
// log output: clickCoordinates x 246.128906, y 0.000000, z 0.985565
// log output: pieceWorldTransform =
//   m11 = 242.15889219510001, m12 = -0.000045609300002524833, m13 = -0.00000721691076126, m14 = 0,
//   m21 = 0.0000072168760805499971, m22 = -0.000039452697396149999, m23 = 242.15890446329999, m24 = 0,
//   m31 = -0.000045609300002524833, m32 = -242.15889219510001, m33 = -0.000039452676995750002, m34 = 0,
//   m41 = -4268.2349924762348, m42 = -12724.050221935429, m43 = 4852.6652710104272, m44 = 1
SCNVector3 newPiecePosition;
newPiecePosition.x = offsetOfMouseClickFromPiece.x + clickCoordinates.x;
newPiecePosition.y = offsetOfMouseClickFromPiece.y + clickCoordinates.y;
newPiecePosition.z = offsetOfMouseClickFromPiece.z + clickCoordinates.z;
// log output: newPiecePosition x 472.453484, y -12833.706425, z 3162.294639
selectedPiece.position = newPiecePosition;
}
Up to this point, I’ve gotten a lot of interesting and useful comments and advice. But I’ve realized that to move forward, I’m probably going to need a working code sample which shows the secret sauce which allows clicks and vectors to exist in the “same universe”.
You don't need to "play around" to find the origin. You can do code like this:
let originZ = sceneView.projectPoint(SCNVector3Zero).z
let viewLocation = /* Location in SCNView coordinate system */
let viewLocationWithOriginZ = SCNVector3(
x: Float(viewLocation.x), // X & Y are in view coordinate system
y: Float(viewLocation.y),
z: originZ // Z is in scene coordinate system
)
var position = sceneView.unprojectPoint(viewLocationWithOriginZ)
/* "position" is in the scene's coordinate system with z of 0 */
You have to use the unprojectPoint method. The key is to play around with z-values to find the correct depth of your scene relative to your click point (at least, that's how I understood it). I had a similar problem trying to do touch-dragging on iOS; this is what I did to solve it:
let convertedTranslation = scnView.unprojectPoint(SCNVector3Make(Float(translation.x), Float(translation.y), 1.0))
Notice that the z-value passed to the method is 1. From the documentation you can see that it can be a value between -1 and 1; -1 and 0 did not work for me, but 1 did.
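Both answers rely on the same project/unproject round trip: project once at mouse-down to learn the point's screen-space depth, then unproject the dragged screen point at that saved depth. A minimal Python sketch with a toy pinhole projection (not SceneKit's actual matrices, just the idea; the focal length and points are illustration values):

```python
def project(p, focal=1.0):
    """Toy pinhole projection: camera-space (x, y, z) -> (sx, sy, depth).
    The camera looks down -Z, so depth is stored as -z."""
    x, y, z = p
    return (focal * x / -z, focal * y / -z, -z)

def unproject(s, focal=1.0):
    """Inverse mapping: screen point plus stored depth -> camera-space point."""
    sx, sy, depth = s
    return (sx * depth / focal, sy * depth / focal, -depth)

p = (2.0, 1.0, -5.0)
sx, sy, depth = project(p)         # at mouseDown: remember depth
# During the drag: move the screen point, reuse the mouseDown depth
dragged = unproject((sx + 0.1, sy, depth))
```

The saved depth is what keeps the dragged piece on the same plane instead of flying toward or away from the camera, which is the "same universe" trick the question is after.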

Draw a UIImage along a CGMutablePathRef

How do I draw a custom UIImage along a CGMutablePathRef? I can get the points from the CGMutablePathRef, but not the smoothed points that make up the path.
I want to know if I can extract all of them, including the ones that create the smooth path.
I've used CGPathApply, but I only get the control points, and when I draw my image the result is not as smooth as the original CGMutablePathRef:
void pathFunction(void *info, const CGPathElement *element){
if (element->type == kCGPathElementAddQuadCurveToPoint)
{
CGPoint firstPoint = element->points[1];
CGPoint lastPoint = element->points[0];
UIImage *tex = [UIImage imageNamed:@"myimage.png"];
CGPoint vector = CGPointMake(lastPoint.x - firstPoint.x, lastPoint.y - firstPoint.y);
CGFloat distance = hypotf(vector.x, vector.y);
vector.x /= distance;
vector.y /= distance;
for (CGFloat i = 0; i < distance; i += 1.0f) {
CGPoint p = CGPointMake(firstPoint.x + i * vector.x, firstPoint.y + i * vector.y);
[tex drawAtPoint:p blendMode:kCGBlendModeNormal alpha:1.0f];
}
}
}
It seems like you are looking for the function used to draw a cubic Bézier curve from a start point, an end point, and two control points:
start⋅(1-t)^3 + 3⋅c1⋅t(1-t)^2 + 3⋅c2⋅t^2(1-t) + end⋅t^3
By choosing a value for t between 0 and 1 you get a point on the curve at a certain percentage along its length. I have a short description of how it works at the end of this blog post.
Update
To find the point at which to draw the image, somewhere between the start and end points, you pick a t (for example 0.36) and use it to calculate the x and y values of that point:
CGPoint start, end, c1, c2; // set to some value of course
CGFloat t = 0.36;
CGFloat x = start.x*pow((1-t),3) + 3*c1.x*t*pow((1-t),2) + 3*c2.x*pow(t,2)*(1-t) + end.x*pow(t,3);
CGFloat y = start.y*pow((1-t),3) + 3*c1.y*t*pow((1-t),2) + 3*c2.y*pow(t,2)*(1-t) + end.y*pow(t,3);
CGPoint point = CGPointMake(x,y); // this is 36% along the line of the curve
This, given the path in the image, corresponds to the orange circle.
If you do this for many points along the curve you will have many images positioned along the curve.
Update 2
You are missing that kCGPathElementAddQuadCurveToPoint (implicitly) has 3 points: the start (the current/previous point), the control point (points[0]), and the end point (points[1]). For a quad curve both control points are the same, so c1 = c2. For kCGPathElementAddCurveToPoint you would get two different control points.
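The formula above can be put into a runnable form: evaluate the cubic at many t values and stamp the image at each sample. A Python sketch (the sample count and points are illustration values; for the quad-curve element, set c1 = c2 to the single control point, as noted above):

```python
def cubic_bezier(t, start, c1, c2, end):
    """Point at parameter t on a cubic Bezier curve:
    start*(1-t)^3 + 3*c1*t*(1-t)^2 + 3*c2*t^2*(1-t) + end*t^3."""
    u = 1.0 - t
    return tuple(
        s * u**3 + 3 * a * t * u**2 + 3 * b * t**2 * u + e * t**3
        for s, a, b, e in zip(start, c1, c2, end)
    )

def sample_curve(start, c1, c2, end, n=32):
    """n+1 evenly spaced parameter samples -- roughly where each image copy goes."""
    return [cubic_bezier(i / n, start, c1, c2, end) for i in range(n + 1)]

# Quad-curve element from CGPathApply: the one control point doubles as c1 and c2
pts = sample_curve((0, 0), (50, 100), (50, 100), (100, 0), n=4)
```

Note that evenly spaced t values are not evenly spaced along the curve's arc length; for uniform image spacing you would step t adaptively, much as the original code steps by distance along straight segments.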
