I'm new to three.js (and, in a way, to 3D in general…) and I need to do something that appears to be way more complicated than I thought. I have two needs:
Draw a polygon from a set of points
Extrude that polygon
As input, I have a set of points in 3D (x, y and z) defining the outline of the polygon, plus the "depth" of the extrusion. Each point can be anywhere in x/y/z, but they all lie on a common plane (that plane having been rotated/moved in space by unknown values beforehand).
I thought it would be relatively easy to create this polygon "in the air" and extrude it in the direction of the polygon's normal, but I can't figure out how to do it.
As an example, this is a (simple) possible input for my use case. Note that I can have more than 3 points:
var polygon = {
  "points": [
    [0, 0, 0],
    [0, 10, 10],
    [10, 0, 10]
  ],
  "extrude": 5
};
As I understand it, Three can only create 2D shapes for extrusion. I thought of projecting my points from their final 3D positions onto a 2D plane (and then rotating the result so the points go back to their original positions), but I'm not quite sure how to do that, and I fear that rounding errors might move my points a bit… Is there a better way? Can someone push me in the right direction?
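For what it's worth, here is a sketch of that flatten, extrude, rotate-back route, assuming the points are coplanar and ordered around the outline, and assuming a reasonably recent three.js (in older releases ExtrudeGeometry's depth option was called amount, and Quaternion.invert() was inverse()):

// Convert the input points to Vector3s.
var pts = polygon.points.map(function (p) {
  return new THREE.Vector3(p[0], p[1], p[2]);
});

// Plane normal from the first three points.
var normal = new THREE.Vector3()
  .subVectors(pts[1], pts[0])
  .cross(new THREE.Vector3().subVectors(pts[2], pts[0]))
  .normalize();

// Quaternion that rotates the polygon's plane onto the XY plane, and its inverse.
var toXY = new THREE.Quaternion().setFromUnitVectors(normal, new THREE.Vector3(0, 0, 1));
var back = toXY.clone().invert();

// Flatten: move pts[0] to the origin and rotate, so every point ends up with z ~ 0.
var flat = pts.map(function (p) {
  return p.clone().sub(pts[0]).applyQuaternion(toXY);
});

// Build the 2D shape from the flattened x/y values and extrude it along +z.
var shape = new THREE.Shape(flat.map(function (p) { return new THREE.Vector2(p.x, p.y); }));
var geometry = new THREE.ExtrudeGeometry(shape, { depth: polygon.extrude, bevelEnabled: false });

// Rotate and translate the mesh back so the outline sits on the original points;
// the extrusion then grows along the original plane normal.
var mesh = new THREE.Mesh(geometry, new THREE.MeshNormalMaterial());
mesh.quaternion.copy(back);
mesh.position.copy(pts[0]);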
Apologies in advance for my feeble maths.
I'm trying to find the corners of a plane in space based on the equation of that plane. Here's what I know: I know three points on the plane, and I know both where they fall in the 2D coordinate space of the plane (x, y) and where they are in 3D space. I know the width and height of the plane, and I can now calculate the equation of the plane. The plane sits on the inside of a large sphere that surrounds the origin, so, in theory, it should more or less face where the camera is (though in my diagram it doesn't face the origin, as the diagram is just for illustrative purposes).
But it's not clear to me how I can use that to figure out another point. One thought I had was to find the transform that rotates the plane to be parallel to the xy plane, rotating around one of the points (so that point stays in the same place), find the position of the new point, and then apply the inverse of that transform. But it's not clear to me how I would find that transform matrix or how to use it. Could I do this using the normal and vector maths? I understand what normals are, but I'm fuzzy about how to use them.
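One way to do this with plain vector maths, without building the rotation explicitly, sketched under the assumption that the three known points come as pairs of plane (x, y) and world (x, y, z) coordinates (all names below are hypothetical): write any plane point as a combination of the known points, then apply the same combination to their world positions.

// known = [{ uv: [u, v], xyz: [x, y, z] }, ...]  -- three non-collinear points
// planeUV = [u, v] -- e.g. a corner such as [0, 0] or [width, height]
function planeToWorld(known, planeUV) {
  var p0 = known[0], p1 = known[1], p2 = known[2];

  // Offsets from p0, in plane coordinates (2D) and in world coordinates (3D).
  var d1 = [p1.uv[0] - p0.uv[0], p1.uv[1] - p0.uv[1]];
  var d2 = [p2.uv[0] - p0.uv[0], p2.uv[1] - p0.uv[1]];
  var e1 = [p1.xyz[0] - p0.xyz[0], p1.xyz[1] - p0.xyz[1], p1.xyz[2] - p0.xyz[2]];
  var e2 = [p2.xyz[0] - p0.xyz[0], p2.xyz[1] - p0.xyz[1], p2.xyz[2] - p0.xyz[2]];

  // Solve planeUV - p0.uv = a*d1 + b*d2 (a 2x2 system, via Cramer's rule).
  var ru = planeUV[0] - p0.uv[0], rv = planeUV[1] - p0.uv[1];
  var det = d1[0] * d2[1] - d2[0] * d1[1]; // zero only if the three points are collinear
  var a = (ru * d2[1] - d2[0] * rv) / det;
  var b = (d1[0] * rv - d1[1] * ru) / det;

  // The same combination of the world-space offsets gives the world position.
  return [
    p0.xyz[0] + a * e1[0] + b * e2[0],
    p0.xyz[1] + a * e1[1] + b * e2[1],
    p0.xyz[2] + a * e1[2] + b * e2[2]
  ];
}

For a plane of width w and height h whose plane coordinates start at (0, 0), the four corners would then be planeToWorld(known, [0, 0]), planeToWorld(known, [w, 0]), planeToWorld(known, [0, h]) and planeToWorld(known, [w, h]).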
The example at http://bl.ocks.org/mbostock/5731693 shows the equator drawn on an equirectangular map, which then transitions to another projection. On the equirectangular projection, the equator is just a straight line running horizontally.
Funny things happen, though, if I move the parallel to another latitude, say 20°. Instead of drawing a straight line 20° above the equator, D3 draws curved segments that approach the 30° parallel midway between the given control points.
Since I'm just starting with D3, I am a bit at a loss as to what is happening here.
It looks like each point is connected to the next with a great arc of the sphere they are placed on. Great arcs are defined as sections of great circles, which in turn are circles that divide a sphere in half.
Great arcs are the shortest paths that connect two points on a sphere. They are the equivalent of going in a straight line across the surface of a sphere. On the equator, which is a great circle (since it divides the globe into the northern and southern hemispheres), these great arcs are part of the equator, forming what looks like a straight line on an equirectangular projection. Since the 20° latitude line is not a great circle, the most direct path between any two points on it will not lie on the line, and will instead be arcs that lie to the north of the 20° latitude line.
For a great circle, you can use the coordinates [[-180, 20], [-90, 0], [0, -20], [90, 0], [180, 20]].
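If the goal is to draw the 20° parallel itself (a small circle) rather than great arcs between a handful of control points, one workaround is to sample the parallel densely enough that each great-arc segment stays visually on the line. A sketch, assuming an svg selection and a d3.geo.path() generator named path, as is typical in these examples:

// Densely sampled 20° parallel as a GeoJSON LineString; with control points
// one degree apart, the great arcs between them are indistinguishable from
// the parallel itself.
var parallel20 = {
  type: "LineString",
  coordinates: d3.range(-180, 181, 1).map(function (lon) { return [lon, 20]; })
};

svg.append("path")
    .datum(parallel20)
    .attr("d", path);

Alternatively, a circle of angular radius 70° centred on the north pole is exactly the 20° parallel, so d3.geo.circle().origin([0, 90]).angle(70) in D3 v3 (d3.geoCircle().center([0, 90]).radius(70) in v4+) generates it directly.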
How would I project 2D coordinates onto 3D?
For example, I have an image (represented as a particle) that is 256px wide. If I pretend this image is centered on the origin (0, 0) in 2D space, then the vertical sides of the square are located at x = 128 and x = -128.
So when this image (particle) is placed in a Three.js scene at the origin (0, 0) and the camera is at a distance of CamZ from the origin, how do I project the original 2D coordinates to 3D, which in turn will tell me the width at which the image (particle!) appears on screen, in three.js units?
An easy-to-understand way would be to create a geometry with vertices at (-128, 128, 0), (128, 128, 0), (-128, -128, 0) and (128, -128, 0), then use Projector to project that geometry using the camera. It will give you an array of projected points whose coordinates run from -1 to 1; you'll then need to multiply them by the viewport size.
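In current three.js the old THREE.Projector workflow has been folded into Vector3.project(camera); here is a sketch of the same idea, assuming a camera and renderer already exist:

var halfWidth = renderer.domElement.width / 2;
var halfHeight = renderer.domElement.height / 2;

// Project a world-space point to normalized device coordinates ([-1, 1] on
// each axis), then convert to pixels.
function toScreen(x, y, z) {
  var p = new THREE.Vector3(x, y, z).project(camera);
  return { x: p.x * halfWidth + halfWidth, y: -p.y * halfHeight + halfHeight };
}

var left = toScreen(-128, 0, 0);
var right = toScreen(128, 0, 0);
var apparentWidth = right.x - left.x; // on-screen width of the 256-unit particle, in pixels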
There is another way to do it. The exact translation between 2D and 3D can be approximated heuristically. Often it is more difficult to implement than using three.js to project a vector, but with an exponential translation you can map many 2D/3D translations.
The idea here is to use an Exponential easing function to calculate the translation. The easing functions that most libraries use (such as jQuery UI) are the Robert Penner Easing functions.
The easeOutExpo function works surprisingly well at approximating 2D/3D translations. Generally it would look something like this:
// Robert Penner's easeOutExpo(time, base, change, duration)
function easeOutExpo(t, b, c, d) { return c * (1 - Math.pow(2, -10 * t / d)) + b; }
var xPosition3D = xPosition2D * easeOutExpo(xPosition2D, 0, coefficient, xMax2D);
It takes a coefficient, and the exact number will depend on the aspect ratio and focal length of the 3D camera. Usually, something like -0.2 works well.
I know this is an insufficient explanation, but hopefully it points you in the right direction.
Context: I'm trying to clip a topographic map into the minimum-size ellipse around a number of wind turbines, to minimize the size of the map. The program doing this map clipping can clip in ellipses, but only ellipses with axes aligned along the x and y axes.
I know the algorithm for the bounding ellipse problem (finding the smallest-area ellipse that encloses a set of points).
But how do I constrain this algorithm (or make a different algorithm) such that the resulting ellipse is required to have its major axis oriented either horizontally or vertically, whichever gives the smallest ellipse -- and never at an angle?
Of course, this constraint makes the resulting ellipse larger than it "needs" to be to enclose all the points, but that's the constraint nonetheless.
The algorithm described here (referenced in the link you provided) is about solving the following optimization problem:
minimize   log(det(A^-1))
s.t.       (P_i - c)' * A * (P_i - c) <= 1   for every point P_i

where the ellipse is the set of points x with (x - c)' * A * (x - c) <= 1, so minimizing log(det(A^-1)) is the same as minimizing the ellipse's area.
One can extend this system of inequalities with the following constraint (V is the ellipse rotation matrix; for detailed info refer to the link above):
V == [[1, 0], [0, 1]] // horizontal ellipse
or
V == [[0, -1], [1, 0]] // vertical ellipse
Solving the optimization problem with each of these constraints in turn, and comparing the areas of the resulting ellipses, will give you the required result.
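For concreteness (this follows from the constraint rather than from the linked algorithm itself): with V fixed this way, A is diagonal, say A = [[a, 0], [0, b]] with a, b > 0, so each inequality reduces to a * (x_i - c_x)^2 + b * (y_i - c_y)^2 <= 1, and the area of the resulting ellipse is pi / sqrt(a * b); that area is what you compare between the two runs.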
I have an image of a 3D rectangle (which, due to the projection distortion, is not a rectangle in the image). I know all the world and image coordinates of the corners of this rectangle.
What I need is to determine the world coordinates of a point in the image that lies inside this rectangle. To do that, I need to compute a transformation that unprojects the rectangle to a 2D rectangle.
How can I compute that transform?
Thanks in advance
This is a special case of finding mappings between quadrilaterals that preserve straight lines. These are generally called homographic transforms. Here, one of the quads is a rectangle, so this is a popular special case. You can google these terms ("quad to quad", etc.) to find explanations and code, but here are some sites for you.
Perspective Transform Estimation
a gaming forum discussion
extracting a quadrilateral image to a rectangle
Projective Warping & Mapping
Projective Mappings for Image Warping by Paul Heckbert.
The math isn't particularly pleasant, but it isn't that hard either. You can also find some code at one of the above links.
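Here is a sketch (not taken from any of the links above) of one way to wire this up: compute the homography that sends the four image corners to the rectangle's own 2D coordinates (0, 0), (w, 0), (w, h), (0, h), apply it to the image point, then lift the result into world space using the rectangle's world corners. All names are hypothetical; the corners are assumed to be listed going around the rectangle, in the same order in every array, with the second corner across the width and the fourth across the height.

// Homography H = [a b c; d e f; g h 1] mapping each src[i] to dst[i]
// (four 2D point correspondences, giving an 8x8 linear system).
function computeHomography(src, dst) {
  var M = [], rhs = [];
  for (var i = 0; i < 4; i++) {
    var x = src[i][0], y = src[i][1], X = dst[i][0], Y = dst[i][1];
    M.push([x, y, 1, 0, 0, 0, -x * X, -y * X]); rhs.push(X);
    M.push([0, 0, 0, x, y, 1, -x * Y, -y * Y]); rhs.push(Y);
  }
  return solve(M, rhs); // [a, b, c, d, e, f, g, h]
}

// Gaussian elimination with partial pivoting, for a small dense system.
function solve(A, b) {
  var n = b.length, r, c, t;
  for (var col = 0; col < n; col++) {
    var piv = col;
    for (r = col + 1; r < n; r++) if (Math.abs(A[r][col]) > Math.abs(A[piv][col])) piv = r;
    t = A[col]; A[col] = A[piv]; A[piv] = t;
    t = b[col]; b[col] = b[piv]; b[piv] = t;
    for (r = col + 1; r < n; r++) {
      var f = A[r][col] / A[col][col];
      for (c = col; c < n; c++) A[r][c] -= f * A[col][c];
      b[r] -= f * b[col];
    }
  }
  var x = new Array(n);
  for (r = n - 1; r >= 0; r--) {
    var s = b[r];
    for (c = r + 1; c < n; c++) s -= A[r][c] * x[c];
    x[r] = s / A[r][r];
  }
  return x;
}

function applyHomography(h, p) {
  var w = h[6] * p[0] + h[7] * p[1] + 1;
  return [(h[0] * p[0] + h[1] * p[1] + h[2]) / w,
          (h[3] * p[0] + h[4] * p[1] + h[5]) / w];
}

// imageCorners / worldCorners: the rectangle's corners in the image and in 3D,
// in the same order; rectW / rectH: the rectangle's world-space dimensions.
function imagePointToWorld(imagePoint, imageCorners, rectW, rectH, worldCorners) {
  var rectCorners = [[0, 0], [rectW, 0], [rectW, rectH], [0, rectH]];
  var H = computeHomography(imageCorners, rectCorners);
  var uv = applyHomography(H, imagePoint); // position on the rectangle, in world units
  var A0 = worldCorners[0], B = worldCorners[1], D = worldCorners[3];
  var r = uv[0] / rectW, s = uv[1] / rectH;
  return [A0[0] + r * (B[0] - A0[0]) + s * (D[0] - A0[0]),
          A0[1] + r * (B[1] - A0[1]) + s * (D[1] - A0[1]),
          A0[2] + r * (B[2] - A0[2]) + s * (D[2] - A0[2])];
}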
If I understand you correctly, you have a 2D point in the projection of the rectangle, and you know the 3D (world) and 2D (image) coordinates of all four corners of the rectangle. The goal is to find the 3D coordinates of the unique point on the interior of the (3D, world) rectangle which projects to the given point.
(Do steps 1-3 below for both the 3D (world) coordinates, and the 2D (image) coordinates of the rectangle.)
1. Identify (any) one corner of the rectangle as its "origin", and call it "A", which we will treat as a vector.
2. Label the other vertices B, C, D, in order, so that C is diagonally opposite A.
3. Calculate the vectors v = AB and w = AD. These form nice local coordinates for points in the rectangle: points in the rectangle will be of the form A + rv + sw, where r and s are real numbers in the range [0, 1]. This is true in world coordinates and in image coordinates. In world coordinates, v and w are orthogonal, but in image coordinates they are not. That's OK.
4. Working in image coordinates, from the point (x, y) in the image of your rectangle, calculate the values of r and s. This can be done by linear algebra on the vector equation (x, y) = A + rv + sw, where only r and s are unknown. It boils down to a 2x2 matrix equation, which you can solve in code using Cramer's rule. (This step breaks if the determinant of the matrix is zero, which corresponds to the rectangle being seen edge-on; the solution isn't unique in that case, so if that's possible in your setting, handle it as a special case.)
5. Using the values of r and s from step 4, compute A + rv + sw using the world-coordinate vectors A, v, w. That's the world point on the rectangle (sketched in code below).
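A minimal sketch of these steps (corner C isn't needed; all names are hypothetical). imgA, imgB, imgD are the 2D image-space corners and wldA, wldB, wldD the 3D world-space corners, labelled as above:

function imageToWorld(x, y, imgA, imgB, imgD, wldA, wldB, wldD) {
  // Steps 3-4, in image coordinates: local basis v = AB, w = AD, then solve
  // (x, y) = A + rv + sw for r and s with Cramer's rule (a 2x2 system).
  var vx = imgB[0] - imgA[0], vy = imgB[1] - imgA[1];
  var wx = imgD[0] - imgA[0], wy = imgD[1] - imgA[1];
  var px = x - imgA[0], py = y - imgA[1];
  var det = vx * wy - wx * vy; // zero when the rectangle is seen edge-on
  var r = (px * wy - wx * py) / det;
  var s = (vx * py - px * vy) / det;

  // Step 5: apply the same r and s to the world-coordinate basis.
  return [
    wldA[0] + r * (wldB[0] - wldA[0]) + s * (wldD[0] - wldA[0]),
    wldA[1] + r * (wldB[1] - wldA[1]) + s * (wldD[1] - wldA[1]),
    wldA[2] + r * (wldB[2] - wldA[2]) + s * (wldD[2] - wldA[2])
  ];
}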