I'm trying to determine how far away the camera needs to be from my Object3D, which is a collection of meshes, in order for the entire model to be framed in the viewport.
I get the object3D size like this:
public getObjectSize(target: THREE.Object3D): Size {
    let box: THREE.Box3 = new THREE.Box3().setFromObject(target);
    let size: Size = {
        depth: (-1 * box.min.z) + box.max.z,
        height: (-1 * box.min.y) + box.max.y,
        width: (-1 * box.min.x) + box.max.x
    };
    return size;
}
Next I use trig in an attempt to determine how far back the camera needs to be based on that box size in order for the entire box to be visible.
private determineCameraDistance(): number {
    let cameraDistance: number;
    let halfFOVInRadians: number = this.geometryService.getRadians(this.FOV / 2);
    let height: number = this.productModelSizeService.getObjectSize(this.viewService.primaryView.scene).height;
    let width: number = this.productModelSizeService.getObjectSize(this.viewService.primaryView.scene).width;
    cameraDistance = (width / 2) / Math.tan(halfFOVInRadians);
    return cameraDistance;
}
The math all works out on paper, and the length of the adjacent side of the triangle (the camera distance) can be verified using a^2 + b^2 = c^2. However, for some reason the distance returned is 10.4204, while the camera distance I actually need to show the entire Object3D is 95 (determined by hard-coding the value), so I can only see a tiny portion of my model.
Any ideas on what I might be doing wrong, or a better way to determine this? It seems like there is some kind of unit conversion I'm missing when going from the box-size units to camera-distance units.
Actual numbers used in the calculation:
FOV = 110 degrees,
Object3D size: {
Depth: 11.6224,
Height: 18.4,
Width: 29.7638
}
So we take half the field of view to create a right triangle, with the adjacent side placed along our camera distance; that's 55 degrees. We then use the formula degrees * PI / 180 to convert 55 degrees into the radian equivalent, which is .9599. Next we take half the Object3D width, again to create a right triangle, which is 14.8819. We can now take our half width and divide it by the tangent of the half FOV (in radians); this gives us the length of the adjacent side / camera distance: 10.4204.
To further verify that this is the correct length for this side, I'll get the length of the hypotenuse using SOHCAHTOA again:
Sin(55) = 14.8819 / y
.8192 * y = 14.8819
y = 14.8819 / .8192
y = 18.1664
Now we can use the Pythagorean theorem to solve for b and check our math.
14.8819^2 + b^2 = 18.1664^2
221.4709 + b^2 = 330.0018
b^2 = 108.5835
b = 10.4203 (we're off by .0001 but that's due to rounding)
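For completeness, the arithmetic above can be reproduced with a few lines of plain JavaScript (values taken straight from the question):
    // Reproduces the worked example above; numbers are the question's values.
    const halfFOV = 55 * Math.PI / 180;               // 0.9599 rad
    const halfWidth = 29.7638 / 2;                    // 14.8819
    const adjacent = halfWidth / Math.tan(halfFOV);   // ~10.4204 (the adjacent side computed above)
    const hypotenuse = halfWidth / Math.sin(halfFOV); // ~18.167
    console.log(adjacent, hypotenuse);
    console.log(Math.sqrt(hypotenuse * hypotenuse - halfWidth * halfWidth)); // equals adjacent, up to rounding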
The issue ended up being that in Three.js the field of view represents the vertical viewing area. I had been assuming that Three.js, like Maya and other applications, treats the field of view as the horizontal viewing area.
Multiplying the FOV that I was getting by the aspect ratio gives me the correct horizontal field of view, which results in a camera distance of ~92.
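For anyone hitting the same thing, here is a rough sketch of how the framing calculation can be done once you know that PerspectiveCamera.fov is vertical. This is not the poster's exact code; frameDistance is a made-up helper name, and the exact horizontal FOV comes from 2·atan(aspect·tan(vFov/2)). The API names (Box3.setFromObject, Box3.getSize, THREE.MathUtils.degToRad) are from recent three.js releases.
    // Sketch: distance needed for a PerspectiveCamera to frame an Object3D.
    // Assumes three.js is available as THREE.
    function frameDistance(camera, object) {
        const size = new THREE.Vector3();
        new THREE.Box3().setFromObject(object).getSize(size);

        const vFov = THREE.MathUtils.degToRad(camera.fov);               // vertical FOV
        const hFov = 2 * Math.atan(Math.tan(vFov / 2) * camera.aspect);  // horizontal FOV

        // Fit both the height (vertical FOV) and the width (horizontal FOV).
        const distForHeight = (size.y / 2) / Math.tan(vFov / 2);
        const distForWidth = (size.x / 2) / Math.tan(hFov / 2);
        return Math.max(distForHeight, distForWidth);
    }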
(More info at end)----->
I am trying to render a small picture-in-picture display over my scene. The PiP is just a smaller texture, but it is intended to reveal secret objects in the scene when it is placed over them.
To do this, I want to render my scene, then render the SAME scene on the smaller texture, but with the exact same positioning as the main scene. The intended result would be something like this:
My problem is... I cannot get the scene on the smaller texture to match up 1:1. I keep trying various kludges, but ultimately I suspect that I need to do something to the projection matrix to pan it over to the location of the frame. I can get it to zoom correctly...just can't get it to pan.
Can anyone suggest what I need to do to my projection matrix to render my scene 1:1 (but panned by x,y) onto a smaller texture?
The data I have:
Resolution of the full-screen framebuffer
Resolution of the smaller texture
XY coordinate where I want to draw the smaller texture as an overlay sprite
The world/view/projection matrices from the original full-screen scene
The viewport from the original full-screen scene
(Edit)
Here is the function I use to produce the 3D camera:
void Make3DCamera(Vector theCameraPos, Vector theLookAt, Vector theUpVector, float theFOV, Point theRez, Matrix& theViewMatrix, Matrix& theProjectionMatrix)
{
    Matrix aCombinedViewMatrix;
    Matrix aViewMatrix;
    aCombinedViewMatrix.Scale(1,1,-1);
    theCameraPos.mZ*=-1;
    theLookAt.mZ*=-1;
    theUpVector.mZ*=-1;
    aCombinedViewMatrix.Translate(-theCameraPos);

    Vector aLookAtVector=theLookAt-theCameraPos;
    Vector aSideVector=theUpVector.Cross(aLookAtVector);
    theUpVector=aLookAtVector.Cross(aSideVector);
    aLookAtVector.Normalize();
    aSideVector.Normalize();
    theUpVector.Normalize();

    aViewMatrix.mData.m[0][0] = -aSideVector.mX;
    aViewMatrix.mData.m[1][0] = -aSideVector.mY;
    aViewMatrix.mData.m[2][0] = -aSideVector.mZ;
    aViewMatrix.mData.m[3][0] = 0;
    aViewMatrix.mData.m[0][1] = -theUpVector.mX;
    aViewMatrix.mData.m[1][1] = -theUpVector.mY;
    aViewMatrix.mData.m[2][1] = -theUpVector.mZ;
    aViewMatrix.mData.m[3][1] = 0;
    aViewMatrix.mData.m[0][2] = aLookAtVector.mX;
    aViewMatrix.mData.m[1][2] = aLookAtVector.mY;
    aViewMatrix.mData.m[2][2] = aLookAtVector.mZ;
    aViewMatrix.mData.m[3][2] = 0;
    aViewMatrix.mData.m[0][3] = 0;
    aViewMatrix.mData.m[1][3] = 0;
    aViewMatrix.mData.m[2][3] = 0;
    aViewMatrix.mData.m[3][3] = 1;

    if (gG.mRenderToSprite) aViewMatrix.Scale(1,-1,1);
    aCombinedViewMatrix*=aViewMatrix;

    // Projection Matrix
    float aAspect = (float) theRez.mX / (float) theRez.mY;
    float aNear = gG.mZRange.mData1;
    float aFar = gG.mZRange.mData2;
    float aWidth = gMath.Cos(theFOV / 2.0f);
    float aHeight = gMath.Cos(theFOV / 2.0f);
    if (aAspect > 1.0) aWidth /= aAspect;
    else aHeight *= aAspect;
    float s = gMath.Sin(theFOV / 2.0f);
    float d = 1.0f - aNear / aFar;

    Matrix aPerspectiveMatrix;
    aPerspectiveMatrix.mData.m[0][0] = aWidth;
    aPerspectiveMatrix.mData.m[1][0] = 0;
    aPerspectiveMatrix.mData.m[2][0] = gG.m3DOffset.mX/theRez.mX/2;
    aPerspectiveMatrix.mData.m[3][0] = 0;
    aPerspectiveMatrix.mData.m[0][1] = 0;
    aPerspectiveMatrix.mData.m[1][1] = aHeight;
    aPerspectiveMatrix.mData.m[2][1] = gG.m3DOffset.mY/theRez.mY/2;
    aPerspectiveMatrix.mData.m[3][1] = 0;
    aPerspectiveMatrix.mData.m[0][2] = 0;
    aPerspectiveMatrix.mData.m[1][2] = 0;
    aPerspectiveMatrix.mData.m[2][2] = s / d;
    aPerspectiveMatrix.mData.m[3][2] = -(s * aNear / d);
    aPerspectiveMatrix.mData.m[0][3] = 0;
    aPerspectiveMatrix.mData.m[1][3] = 0;
    aPerspectiveMatrix.mData.m[2][3] = s;
    aPerspectiveMatrix.mData.m[3][3] = 0;

    theViewMatrix = aCombinedViewMatrix;
    theProjectionMatrix = aPerspectiveMatrix;
}
Edit to add more information:
Just playing and tweaking numbers, I have come to a "close" result. However the "close" result requires a multiplication by some kludge numbers, that I don't understand.
Here's what I'm doing to the perspective matrix to produce my close result:
//Before calling Make3DCamera, adjusting FOV:
aFOV*=smallerTexture.HeightF()/normalRenderSize.HeightF(); // Zoom it
aFOV*=1.02f; // <- WTH is this?
//Then, to pan the camera over to the x/y position I want, I do:
Matrix aPM=GetCurrentProjectionMatrix();
float aX=(screenX-normalRenderSize.WidthF()/2.0f)/2.0f;
float aY=(screenY-normalRenderSize.HeightF()/2.0f)/2.0f;
aX*=1.07f; // <- WTH is this?
aY*=1.07f; // <- WTH is this?
aPM.mData.m[2][0]=-aX/normalRenderSize.HeightF();
aPM.mData.m[2][1]=-aY/normalRenderSize.HeightF();
SetCurrentProjectionMatrix(aPM);
When I do this, my new picture is VERY close... but not exactly perfect-- the small render tends to drift away from "center" the further the "magic window" is from the center. Without the kludge number, the drift away from center with the magic window is very pronounced.
The kludge numbers 1.02f for zoom and 1.07 for pan reduce the inaccuracies and drift to a fraction of a pixel, but those numbers must be a ratio from somewhere, right? They work at ANY RESOLUTION, though-- so I have a 1280x800 screen and a 256x256 magic window texture... if I change the screen to 1024x768, it all still works.
Where the heck are these numbers coming from?
If you don't care about sub-optimal performance (i.e., drawing the whole scene twice) and if you don't need the smaller scene in a texture, an easy way to obtain the overlay with pixel perfect precision is:
Set up main scene (model/view/projection matrices, etc.) and draw it as you are now.
Use glScissor to set the rectangle for the overlay. glScissor takes the screen-space x, y, width, and height and discards anything outside that rectangle. It looks like you have those four data items already, so you should be good to go.
Call glEnable(GL_SCISSOR_TEST) to actually turn on the test.
Set the shader variables (if you're using shaders) for drawing the greyscale scene/hidden objects/etc. You still use the same view and projection matrices that you used for the main scene.
Draw the greyscale scene/hidden objects/etc.
Call glDisable(GL_SCISSOR_TEST) so you won't be scissoring at the start of the next frame.
Draw the red overlay border, if desired.
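For reference, a rough WebGL-flavored sketch of those steps (the desktop GL calls are analogous); drawMainScene and drawSecretScene are placeholder functions that issue your draw calls with the same view/projection matrices:
    // Sketch of the scissor approach; gl is the rendering context and the two
    // draw functions are placeholders for your own scene passes.
    function renderFrame(gl, overlay /* {x, y, width, height} in pixels */) {
        gl.viewport(0, 0, gl.drawingBufferWidth, gl.drawingBufferHeight);
        drawMainScene();                                   // normal full-screen pass

        // Note: the scissor y coordinate is measured from the bottom of the framebuffer.
        gl.scissor(overlay.x, overlay.y, overlay.width, overlay.height);
        gl.enable(gl.SCISSOR_TEST);                        // clip everything outside the rectangle
        drawSecretScene();                                 // greyscale scene / hidden objects
        gl.disable(gl.SCISSOR_TEST);                       // back to normal for the next frame
        // draw the red overlay border here, if desired
    }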
Now, if you actually need the overlay in its own texture for some reason, this probably won't be adequate...it could be made to work either with framebuffer objects and/or pixel readback, but this would be less efficient.
Most people completely overcomplicate such issues. There is absolutely no magic to applying transformations after applying the projection matrix.
If you have a projection matrix P (and I'm assuming default OpenGL conventions here where P is constructed in a way that the vector is post-multiplied to the matrix, so for an eye space vector v_eye, we get v_clip = P * v_eye), you can simply pre-multiply some other translate and scale transforms to cut out any region of interest.
Assume you have a viewport of size w_view * h_view pixels, and you want to find a projection matrix which renders only a tile of w_tile * h_tile pixels, beginning at pixel location (x_tile, y_tile) (again, assuming default GL conventions here, window space origin is bottom left, so y_tile is measured from the bottom). Also note that the _tile coordinates are to be interpreted relative to the viewport; in the typical case, that would start at (0,0) and have the size of your full framebuffer, but this is by no means required nor assumed here.
Since after applying the projection matrix we are in clip space, we need to transform our coordinates from window space pixels to clip space. Note that clip space is a 4D homogeneous space, but we can use any w value we like (except 0) to represent any point (as a point in the 3D space we care about forms a line in the 4D space we work in), so let's just use w=1 for simplicity's sake.
The view volume in clip space is denoted by the [-w,w] range, so in the w=1 hyperplane, it is [-1,1]. Converting our tile into this space yields:
x_clip = 2 * (x_tile / w_view) - 1
y_clip = 2 * (y_tile / h_view) - 1
w_clip = 2 * w_tile / w_view
h_clip = 2 * h_tile / h_view
We now just need to translate the objects such that the center of the tile is moved to the center of the view volume, which by definition is the origin, and scale the w_clip * h_clip sized region to the full [-1,1] extent in each dimension.
That means:
T = translate(-(x_clip + 0.5 * w_clip), -(y_clip + 0.5 * h_clip), 0)
S = scale(2.0/w_clip, 2.0/h_clip, 1.0)
We can now create the modified projection matrix P' as P' = S * T * P, and that's all there is. Rendering with P' instead of P will render exactly the region of your tile to whatever viewport you are using, so for it to be pixel-exact with respect to your original viewport, you must now render with a viewport which is also w_tile * h_tile pixels big.
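As a concrete sketch, here is that construction with three.js's Matrix4 (any column-major matrix type using the v' = M·v convention works the same way); the tile/viewport variables are the ones defined above, and tileProjection is a made-up helper name:
    // Build P' = S * T * P for a wTile x hTile pixel tile at (xTile, yTile)
    // inside a wView x hView viewport. P is the original THREE.Matrix4 projection matrix.
    function tileProjection(P, xTile, yTile, wTile, hTile, wView, hView) {
        const xClip = 2 * (xTile / wView) - 1;
        const yClip = 2 * (yTile / hView) - 1;
        const wClip = 2 * wTile / wView;
        const hClip = 2 * hTile / hView;

        const T = new THREE.Matrix4().makeTranslation(
            -(xClip + 0.5 * wClip), -(yClip + 0.5 * hClip), 0);
        const S = new THREE.Matrix4().makeScale(2 / wClip, 2 / hClip, 1);

        // premultiply(m) sets this = m * this, so the result is S * T * P.
        return P.clone().premultiply(T).premultiply(S);
    }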
Note that there is also another approach: the viewport is not clamped against the framebuffer you're rendering to. It is actually valid to provide negative values for x and y. If the framebuffer you render your tile into is exactly w_tile * h_tile pixels, you could simply set glViewport(-x_tile, -y_tile, w_view, h_view) and render with the unmodified projection matrix P instead.
I have a sphere in threejs, and I'd like a ring to animate over the top of it.
I have the following progress:
https://codepen.io/EightArmsHQ/pen/zYRdQOw/2919f1a1bdcd2643390efc33bd4b73c9?editors=0010
In the animate function, I call:
const scale = Math.cos((circlePos / this.globeRadius) * Math.PI * 0.5);
console.log(scale);
this.ring.scale.set(scale, scale, 1);
My understanding is that the sin and cos functions are exactly what I need to work out how far around the circle the ring has gotten to. However, the animation actually shows the ring fall inside the sphere, before eventually hitting the 0 scale at the outside of the sphere.
Ideally, I'd also like to just be changing the radius of the sphere but I cannot work out how to do that either, so I think it may be an issue of using the scale function.
How can I keep the ring on the surface of the sphere?
Not quite. Consider this:
You have a right triangle whose bases are your x and y, with a hypotenuse of r = globeRadius. So by Pythagoras' theorem, we have:
x² + y² = r².
So if we solve for the height, y, we get:
y = √(r² - x²).
Thus, in your code, you could write it e.g. like this:
const scale = Math.sqrt(this.globeRadius * this.globeRadius - circlePos * circlePos);
However, this is the scale in terms of world units, not relative to the objects. So for this to work, you need to either divide by your radius again, or just initialise your ring with radius 1:
this.ringGeometry = new THREE.RingGeometry(1, 1.03, 32);
Here I gave it an arbitrary ring width of 0.03 - you may of course adjust it to your own needs.
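Putting that together, the animate step might look like this (a sketch assuming the question's this.ring, this.globeRadius and circlePos, where circlePos is the ring's distance from the sphere's centre along its travel axis, as in the pen):
    // With the ring geometry built at radius 1 (new THREE.RingGeometry(1, 1.03, 32)),
    // the square root is directly the ring's world-space radius at offset circlePos.
    const ringRadius = Math.sqrt(this.globeRadius * this.globeRadius - circlePos * circlePos);
    this.ring.scale.set(ringRadius, ringRadius, 1);

    // If the geometry was instead built at this.globeRadius, divide by the radius again:
    // const scale = ringRadius / this.globeRadius;
    // this.ring.scale.set(scale, scale, 1);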
I haven't been able to solve this issue so far, even though I've already read through several articles - I hope somebody can help here.
Facts (known variables):
Two moving objects on the earth's surface, both with currently known latitude/longitude coordinates.
The speed of both objects is known as well (in m/s).
The direction (angle) of one object is known.
Now I want to calculate the direction (angle) of the second moving object needed to intersect with (hit) the other moving object.
As the distance between the objects is small (in the range of only 5-20 km) and very high accuracy is not needed, it is OK to treat the earth's surface as a plane.
Therefore I already tried working with this great library:
http://www.codeproject.com/Articles/990452/Interception-of-Two-Moving-Objects-in-D-Space
But I don't really get that to work as I don't know how to convert speed in m/s back and forth to latitude/longitude velocity vectors.
To better understand the problem, here is an example with values:
Moving object 1 (runner):
Current location: latitude: 38.565, longitude: -98.513
Speed: 100 m/s
Direction: 270°
Moving object 2 (chaser):
Current location: latitude: 38.724, longitude: -98.449
Speed: 150 m/s
Direction: To be calculated
Any help on that would be highly appreciated, thanks in advance!
If the distances are small and you convert the latitude/longitude coordinates to x,y coordinates on a flat plane as you suggest, using e.g. the answers to this question, then the math needed to solve this problem is quite straightforward.
The image below shows the location of chaser and target at time 0 (red and blue dot), their velocity (the circles show their range after time 1, 2, 3...) and the trajectory of the target (blue ray).
The green curve contains all locations where target and chaser could be at the same moment if they keep moving at their current velocity. The intersection of this curve with the target's trajectory is the interception point (pink dot).
If the interception happens after time t, then the distances travelled by the chaser and the target are t·vc and t·vt. We also know the distance d between the chaser and the target at time 0, and the angle α between the line segment connecting chaser and target and the target's trajectory. If we enter these into the cosine rule we get:
(t·vc)² = (t·vt)² + d² - 2·d·t·vt·cos(α)
which transforms into this quadratic equation when we solve for the time t:
(vc² - vt²)·t² + (2·d·vt·cos(α))·t - d² = 0
If we solve this using the quadratic formula:
t = (-b ± √(b² - 4·a·c)) / (2·a)
where:
a = vc² - vt²
b = 2·d·vt·cos(α)
c = -d²
A non-negative discriminant means interception is possible, and the root obtained by using addition in the quadratic formula is the first or only time of interception.
As you can see in the image above, if the chaser is slower than the target, there are potentially two interception points if the target moves towards the chaser (blue ray intersects with green curve), but none if the target moves away from the chaser. Using addition in the quadratic formula gives the first interception point (using subtraction would give the second).
We can then calculate the position of the target after time t, which is the interception point, and the direction from the chaser to this point.
The JavaScript code snippet below demonstrates this method, with the values from both images. It uses angles in radians and the orientation: 0 = right, π/2 = up, π = left, -π/2 = down. The case where the chaser and target have equal velocity (and the curve is a straight line) is solved using the isosceles triangle theorem, because otherwise it would lead to a division by zero in the quadratic equation.
function intercept(chaser, target) {
    var dist = distance(chaser, target);
    var dir = direction(chaser, target);
    var alpha = Math.PI + dir - target.dir;

    // EQUAL VELOCITY CASE: SOLVE BY ISOSCELES TRIANGLE THEOREM
    if (chaser.vel == target.vel) {
        if (Math.cos(alpha) < 0) return NaN; // INTERCEPTION IMPOSSIBLE
        return (dir + alpha) % (Math.PI * 2);
    }

    // GENERAL CASE: SOLVE BY COSINE RULE AND QUADRATIC EQUATION
    var a = Math.pow(chaser.vel, 2) - Math.pow(target.vel, 2);
    var b = 2 * dist * target.vel * Math.cos(alpha);
    var c = -Math.pow(dist, 2);
    var disc = Math.pow(b, 2) - 4 * a * c;
    if (disc < 0) return NaN; // INTERCEPTION IMPOSSIBLE
    var time = (Math.sqrt(disc) - b) / (2 * a);
    var x = target.x + target.vel * time * Math.cos(target.dir);
    var y = target.y + target.vel * time * Math.sin(target.dir);
    return direction(chaser, {x: x, y: y});

    function distance(p, q) {
        return Math.sqrt(Math.pow(p.x - q.x, 2) + Math.pow(p.y - q.y, 2));
    }

    function direction(p, q) {
        return Math.atan2(q.y - p.y, q.x - p.x);
    }
}
var chaser = {x: 196, y: -45, vel: 100};
var target = {x: 139, y: -312, vel: 75, dir: 0.1815142422};
document.write(intercept(chaser, target) + "<br>"); // -1.015 rad = -58.17°
var chaser = {x: 369, y: -235, vel: 37.5};
var target = {x: 139, y: -376, vel: 75, dir: 0.1815142422};
document.write(intercept(chaser, target) + "<br>"); // -1.787 rad = -102.4°
Other interception points
The green curve effectively divides the 2D plane into a zone where the target will arrive first, and a zone where the chaser will arrive first. If you want the chaser and the target to move at constant speed and collide (imagine e.g. a torpedo being fired at a moving ship) then you'd aim for a point on the curve, where the two will arrive at the same time, as explained above.
However, if the chaser can go to a point and wait there for the target to arrive (imagine e.g. a person trying to catch a bus), then every point on the target's trajectory that is within the "chaser's zone" could be the interception point.
In the first image (slower target) the curve is a circle around the target, and once the target moves out of this circle (to the right of the interception point indicated in pink), the chaser could always get there first and wait for the target. This could be useful if you want a safety margin in case the chaser's or target's speed isn't constant.
In the second image (faster target) the curve is a circle around the chaser, and every point on the target's trajectory inside this circle could be the interception point. The chaser could e.g. move perpendicular to the target's trajectory, in order to minimise the distance travelled, or aim at a point halfway between the first and last interception point to maximise the waiting time.
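If you want both roots explicitly (for example to know the window within which a "move there and wait" strategy works), a small variation of the intercept() code above returns every non-negative interception time; this sketch assumes the general case with unequal speeds:
    // Variation on intercept(): return all non-negative interception times,
    // earliest first. Assumes chaser.vel != target.vel (the general case above).
    function interceptTimes(chaser, target) {
        var dist = Math.sqrt(Math.pow(chaser.x - target.x, 2) + Math.pow(chaser.y - target.y, 2));
        var dir = Math.atan2(target.y - chaser.y, target.x - chaser.x);
        var alpha = Math.PI + dir - target.dir;

        var a = Math.pow(chaser.vel, 2) - Math.pow(target.vel, 2);
        var b = 2 * dist * target.vel * Math.cos(alpha);
        var c = -Math.pow(dist, 2);
        var disc = Math.pow(b, 2) - 4 * a * c;
        if (disc < 0) return [];                    // interception impossible
        var t1 = (-b + Math.sqrt(disc)) / (2 * a);  // same root intercept() uses
        var t2 = (-b - Math.sqrt(disc)) / (2 * a);  // the other interception point
        return [t1, t2].filter(function (t) { return t >= 0; })
                       .sort(function (p, q) { return p - q; });
    }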
I'm finding the angle between the centre of my circle and the triangle in degrees like so:
atan2((centre.y - triangle.y), (centre.x - triangle.x)) * 180 / PI - 90
I'm setting the rotation of my triangle object which takes degrees as a parameter. The issue is all of my triangles are not rotated outwards correctly, which I presume is a result of the calculation of my position which is done like this:
triangle.x = -(width / 2) + (stage.width / 2) + radius * sin((index / total) * (2 * PI))
Here is an example of what happens; as you can see, the last few triangles in the circle appear to be facing outwards correctly.
OK, I need some answer space to put all this info.
First of all you need to calculate the angle of a given triangle. You can do that with the following:
int angle = (360 / numberOfElements) * triangleIndex;
You also need to work out a "slice" (don't know what that is, just read it) to use for calculating the new position:
var slice = (2 * Math.PI / numberOfElements) * triangleIndex;
Next, you need to work out the position of each triangle:
int tempRadius = radius + (int)(triangleHeight / 2);
int triangleCentreX = (int)(centre.X + tempRadius * Math.Cos(slice));
int triangleCentreY = (int)(centre.Y + tempRadius * Math.Sin(slice));
//assuming centre is the centre of the circle
[Credit for all this maths goes to this answer.]
Now that you have the correct position of each of your triangles, you should be able to apply the rotation (using angle) and it should look amaze-balls!
NOTE: Positions will be calculated starting at the right (i.e. 90 degrees), so when doing the rotation add an extra 90 degrees!
http://jsfiddle.net/TcENr/ (it was the quickest to test!)
The issue with the subtle offset of the rotation was that I wasn't adding the half width and height of the triangle to its position; this fixed the problem:
rotation = atan2(centreY-(triangleY+triangleHalfHeight),centreX-(triangleX+triangleHalfWidth)) * 180 / Math.PI - 90;
I'm playing a bit with D3.js and I got most things working, but I want to place my svg shapes in a circle. I will show the difference in data with color and text. I know how to draw circles and pie charts, but I basically want a circle of same-size circles, without them overlapping; the order is irrelevant. I don't know where to start to find out the x & y for each circle.
If I understand you correctly, this is a fairly standard math question:
Simply loop over some angle variable in the appropriate step size and use sin() and cos() to calculate your x and y values.
For example:
Let's say you are trying to place 3 objects. There are 360 degrees in a circle. So each object is 120 degrees away from the next. If your objects are 20x20 pixels in size, place them at the following locations:
x1 = sin( 0 * pi()/180) * r + xc - 10; y1 = cos( 0 * pi()/180) * r + yc - 10
x2 = sin(120 * pi()/180) * r + xc - 10; y2 = cos(120 * pi()/180) * r + yc - 10
x3 = sin(240 * pi()/180) * r + xc - 10; y3 = cos(240 * pi()/180) * r + yc - 10
Here, r is the radius of the circle and (xc, yc) are the coordinates of the circle's center point. The -10's make sure that the objects have their center (rather than their top left corner) on the circle. The * pi()/180 converts the degrees to radians, which is the unit most implementations of sin() and cos() require.
Note: This places the shapes equally distributed around the circle. To make sure they don't overlap, you have to pick your r big enough. If the objects have simple and identical boundaries, just lay out 10 of them and figure out the radius you need; then, if you need to place 20, make the radius twice as big, for 30 three times as big, and so forth. If the objects are irregularly shaped and you want to place them in the optimal order around the circle to find the smallest circle possible, this problem gets extremely messy. Maybe there's a library for this, but I don't have one off the top of my head, and since I haven't used D3.js, I'm not sure whether it provides this functionality either.
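In code, the general loop version of this looks roughly as follows (plain JavaScript; n, r, xc, yc and the 20x20 object size are placeholders you'd fill in):
    // Place n objects of size 20x20 evenly on a circle of radius r around (xc, yc).
    // n, r, xc and yc are assumed to be defined by the caller.
    const positions = [];
    for (let i = 0; i < n; i++) {
        const angle = (i / n) * 2 * Math.PI;     // step around the circle in radians
        positions.push({
            x: Math.sin(angle) * r + xc - 10,    // -10 centres the 20px object
            y: Math.cos(angle) * r + yc - 10
        });
    }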
Here's another approach to this, for shapes of arbitrary size, using D3's tree layout: http://jsfiddle.net/nrabinowitz/5CfGG/
The tree layout (docs, example) will figure out the x,y placement of each item for you, based on a given radius and a function returning the separation between the centers of any two items. In this example, I used circles of varying sizes, so the separation between them is a function of their radii:
var tree = d3.layout.tree()
    .size([360, radius])
    .separation(function(a, b) {
        return radiusScale(a.size) + radiusScale(b.size);
    });
Using the D3 tree layout solves the first problem, laying out the items in a circle. The second problem, as #Markus notes, is how to calculate the right radius for the circle. I've taken a slightly rough approach here, for the sake of expediency: I estimate the circumference of the circle as the sum of the diameters of the various items, with a given padding in between, then calculate radius from the circumference:
var roughCircumference = d3.sum(data.map(radiusScale)) * 2 +
        padding * (data.length - 1),
    radius = roughCircumference / (Math.PI * 2);
The circumference here isn't exact, and this will be less and less accurate the fewer items you have in the circle, but it's close enough for this purpose.