Three.js: How to apply raycast tolerance

That is, how do I figure out intersections using not only the mouse position but a rectangular 3 x 3 or 5 x 5 area? Do I have to create these 8 or 16 extra rays and calculate intersections for all objects? Could this be a Three.js feature request?
I want some degree of freedom when I pick an object.
Thanks.
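As far as I know there is no built-in tolerance for mesh picking, but you can approximate one today by casting a small bundle of rays yourself. Below is a minimal sketch, assuming a standard setup where mouse holds the pointer position in normalized device coordinates; the helper name pickWithTolerance and the toleranceNdc spacing are made up for illustration:

```javascript
import * as THREE from 'three';

// Cast a gridSize x gridSize bundle of rays around the pointer and
// return the closest intersection found across all of them.
function pickWithTolerance(mouse, camera, objects, gridSize = 3, toleranceNdc = 0.01) {
  const raycaster = new THREE.Raycaster();
  const half = Math.floor(gridSize / 2);
  let closest = null;
  for (let i = -half; i <= half; i += 1) {
    for (let j = -half; j <= half; j += 1) {
      const ndc = new THREE.Vector2(mouse.x + i * toleranceNdc, mouse.y + j * toleranceNdc);
      raycaster.setFromCamera(ndc, camera);
      const hits = raycaster.intersectObjects(objects, true);
      if (hits.length && (!closest || hits[0].distance < closest.distance)) {
        closest = hits[0];
      }
    }
  }
  return closest; // null when nothing was hit
}
```

For points and lines specifically, Three.js already exposes a tolerance via raycaster.params.Points.threshold and raycaster.params.Line.threshold, which avoids the extra rays entirely.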

Related

What is the maximum x-axis range of acquired depth data in Google Project Tango?

I need to divide the points based on their x-position, so that there are, for example, three divisions of points (left, middle, and right). The middle one should have a range of one meter. So I was wondering: what are the min/max ranges of the x-axis? Is it large enough to add more than three divisions with the same range (1 meter)?
Thanks
I'm not sure your question is precise enough.
The x and y positions of the depth data will depend on the actual depth of the image. In particular, they will depend on the depth and the angle of the camera. If the wall in front of the camera is very close, there will be less x-axis range.
As an example, for depth data with an average z-depth of 1.5, I get an x-range of around [-0.8, 0.8]. For another frame with an average z-depth of 3.0, the range goes to [-1.6, 1.6]. Of course these numbers depend on the scene itself; this is just to give you a rough idea.
Is it clearer now?
You can compute the horizontal field of view with this equation:
Horizontal FOV = 2 * atan(0.5 * width / Fx)
https://developers.google.com/tango/overview/intrinsics-extrinsics
On the Tango Yellowstone it is about 63 degrees, which means you have roughly 31.5 degrees to the left and 31.5 degrees to the right.
Now, if you have point cloud data in xyz form, you can apply trigonometry: at z = 1 meter, the half-width is tan(31.5°) ≈ 0.61 m, so the x-range is about [-0.61, 0.61].
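As a rough sketch of that trigonometry (the width and fx values below are placeholders for illustration, not actual Tango Yellowstone intrinsics; real ones come from the device's camera intrinsics):

```javascript
// Horizontal x-range of the depth view at a given z-depth, using the
// FOV formula quoted above. width and fx are assumed example values.
const width = 320; // depth image width in pixels (assumed)
const fx = 260;    // horizontal focal length in pixels (assumed)
const hFov = 2 * Math.atan((0.5 * width) / fx); // ~1.10 rad, about 63 degrees

function xRangeAtDepth(z) {
  const halfWidth = z * Math.tan(hFov / 2);
  return [-halfWidth, halfWidth];
}

console.log(xRangeAtDepth(1.0)); // roughly [-0.62, 0.62]
console.log(xRangeAtDepth(3.0)); // roughly [-1.85, 1.85]
```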

Calculating the rotation vector of a sphere

I'm trying to calculate the axis of rotation of a ball which is moving and spinning at the same time, i.e. I want the vector along the axis that the ball is spinning on.
For every frame I know the x, y and z locations of 3 specific points on the surface of the sphere. I assume that by looking at how these 3 points have moved in successive frames, you can calculate the axis of rotation of the ball. However, I have very little experience with this kind of maths, so any help would be appreciated!
You could use the fact that the direction a position vector moves in will always be perpendicular to the axis of rotation. Therefore, if you have two position vectors v1 and v2 at successive times (for the same point), use

(v2 - v1) · w = 0

This gives you an equation with three unknowns (the components of w, the rotation axis). If you then plug in all three points you have knowledge of, you should be able to solve these simultaneous equations and work out w. (Note the system is homogeneous, so w is only determined up to scale; normalise it afterwards.)
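In code, the same idea is often expressed with a cross product: each point's displacement between frames is perpendicular to the axis, so the cross product of two displacement vectors lies along the axis. A minimal sketch with plain [x, y, z] arrays (one assumption: if the ball is also translating, subtract the centre's motion from each point first):

```javascript
function sub(u, v) { return [u[0] - v[0], u[1] - v[1], u[2] - v[2]]; }

function cross(u, v) {
  return [
    u[1] * v[2] - u[2] * v[1],
    u[2] * v[0] - u[0] * v[2],
    u[0] * v[1] - u[1] * v[0],
  ];
}

function normalize(u) {
  const n = Math.hypot(u[0], u[1], u[2]);
  return [u[0] / n, u[1] / n, u[2] / n];
}

// a1 -> a2 and b1 -> b2 are two surface points in successive frames.
// Both displacements are perpendicular to the rotation axis, so their
// cross product points along it (up to sign).
function rotationAxis(a1, a2, b1, b2) {
  return normalize(cross(sub(a2, a1), sub(b2, b1)));
}
```

The third point can then serve as a consistency check, or all three can be folded into a least-squares estimate when the measurements are noisy.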

How to rotate orientation in "Fast Approximated SIFT"?

The paper "Fast Approximated SIFT" (M Grabner, H Grabner, ACCV 2006)
http://www.icg.tu-graz.ac.at/publications/pubobjects/mgrabner06FastApproxSIFT
shows an improved method to extract SIFT descriptors from an image using integral histograms.
It says "for the descriptor we rotate the midpoints of each sub-patch relative to the orientation and compute the histograms of overlapping sub-patches without aligning the squared region but shifting the sub-patch histogram relative to the main orientation."
In this paper, the histograms of the 4*4 sub-patches around the keypoint can be computed easily using an integral histogram. However, the resulting histograms are not rotated by the orientation of the keypoint. Conventional SIFT needs every pixel in the sub-patches to be rotated by the orientation before the histogram is computed. But it seems this new method can apply the rotation after computing the non-rotated histogram, by "shifting the sub-patch histogram relative to the main orientation". I do not understand how to "shift the sub-patch histogram relative to the main orientation".
I quote here:"for the descriptor we rotate the midpoints of each sub-patch relative to the orientation and compute the histograms of overlapping sub-patches without aligning the squared region but shifting the sub-patch histogram relative to the main orientation."
For example, if a non-rotated sub-patch histogram has 8 bins from 0 to 2pi with an interval of pi/4, with bin values 2, 4, 5, 3, 6, 8, 7, 1, and the orientation of the keypoint is pi/6, how do I find the new values of the 8 bins in the rotated histogram?
As far as I understand it: they round the orientation to the nearest pi/4 interval. That way you can just rotate the entire array, and
2 4 5 3 6 8 7 1 becomes
4 5 3 6 8 7 1 2, which represents the histogram of the rotated patch.
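A literal version of that shift, operating only on array indices (no pixel resampling):

```javascript
// Round the keypoint orientation to the nearest bin width and rotate
// the histogram array by that many bins.
function shiftHistogram(hist, orientation) {
  const binWidth = (2 * Math.PI) / hist.length; // pi/4 for 8 bins
  const shift = Math.round(orientation / binWidth) % hist.length;
  return hist.map((_, i) => hist[(i + shift) % hist.length]);
}

console.log(shiftHistogram([2, 4, 5, 3, 6, 8, 7, 1], Math.PI / 6));
// pi/6 rounds to one bin (pi/4): [4, 5, 3, 6, 8, 7, 1, 2]
```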

Intersect picked Ray with shapes in OpenGL

I am trying to perform picking in OpenGL, and have 3 questions in 1.
I call the unproject command twice, once with 0 and once with 1 as the near/far depth values.
Some articles say that 0 and 1 are fine; others say that I should use a calculated depth. Which one should I take?
Then, assuming I subtract both results, that gives me a ray. The ray goes from my "camera" in the direction indicated by x, y, z, right? Are x, y, z absolute values or relative to my "camera"?
Now that I have the ray, how can I intersect it with shapes? And, by the way, how can I list the existing shapes and test their coordinates against the ray?
2 - Your ray will be relative to the camera; just multiply it by the inverse camera transform.
3 - For just about all purposes, you need a spatial subdivision algorithm (Binary Space Partition, Bounding Volume Hierarchy, etc.), and you should maintain a list of the shapes you have created.
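To illustrate point 3, the per-shape test itself is analytic geometry once the ray and the shape are in the same space. A minimal ray-sphere example as a sketch (vectors as plain [x, y, z] arrays; dir is assumed normalized):

```javascript
// Returns the distance t along the ray to the nearest sphere hit,
// or null if the ray misses or the sphere is behind the origin.
function intersectRaySphere(origin, dir, center, radius) {
  const oc = [origin[0] - center[0], origin[1] - center[1], origin[2] - center[2]];
  const b = oc[0] * dir[0] + oc[1] * dir[1] + oc[2] * dir[2]; // dot(oc, dir)
  const c = oc[0] * oc[0] + oc[1] * oc[1] + oc[2] * oc[2] - radius * radius;
  const disc = b * b - c; // discriminant of t^2 + 2*b*t + c = 0
  if (disc < 0) return null;
  const t = -b - Math.sqrt(disc); // nearest root
  return t >= 0 ? t : null;
}
```

Triangles, boxes, and other shapes each have their own analytic test; the spatial subdivision structure just cuts down how many of those tests you have to run.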

How to generate a subdivided icosahedron?

I've asked some questions here and seen this geometric shape mentioned a few times among other geodesic shapes, but I'm curious: how exactly would I generate one about a point (x, y, z)?
There's a tutorial here.
The essential idea is to start with an icosahedron (which has 20 triangular faces) and to repeatedly subdivide each triangular face into smaller triangles. At each stage, each new point is shifted radially so it is the correct distance from the centre.
The number of stages will determine how many triangles are generated and hence how close the resulting mesh will be to a sphere.
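A sketch of one subdivision pass in plain JavaScript (triangles as arrays of three [x, y, z] vertices, assumed to already lie on the sphere):

```javascript
// Project a point radially onto the sphere of the given radius.
function toSphere(v, radius) {
  const n = Math.hypot(v[0], v[1], v[2]);
  return [(v[0] * radius) / n, (v[1] * radius) / n, (v[2] * radius) / n];
}

function midpoint(a, b) {
  return [(a[0] + b[0]) / 2, (a[1] + b[1]) / 2, (a[2] + b[2]) / 2];
}

// Split every face into four and push the new vertices onto the sphere.
function subdivide(triangles, radius) {
  const out = [];
  for (const [a, b, c] of triangles) {
    const ab = toSphere(midpoint(a, b), radius);
    const bc = toSphere(midpoint(b, c), radius);
    const ca = toSphere(midpoint(c, a), radius);
    out.push([a, ab, ca], [ab, b, bc], [ca, bc, c], [ab, bc, ca]);
  }
  return out; // four times as many faces per pass
}
```

Incidentally, Three.js packages exactly this construction as THREE.IcosahedronGeometry(radius, detail), where detail is the number of subdivision passes.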
Here is one reference that I've used for subdivided icosahedrons, based on the OpenGL Red Book. The BSD-licensed source code to my iPhone application Molecules contains code for generating simple icosahedrons and loading them into a vertex buffer object for OpenGL ES. I haven't yet incorporated subdivision to improve the quality of the rendering, but it's in my plans.
To tessellate a sphere, most people subdivide the points linearly, but that does not produce a rounded shape.
For a rounded tessellation, rotate the two points through a series of rotations:
1) Rotate the second point around z (by the z angle of point 1) to 0.
2) Rotate the second point around y (by the y angle of point 1) to 0 (this logically puts point 1 at the north pole).
3) Rotate the second point around z to 0 (this logically puts point 1 on the x/y plane, which now becomes a unit circle).
4) Find the half-angle and compute x and y for the new third point, point 3.
5) Perform the counter-rotations in the reverse order for steps 3), 2) and 1) to move the third point to its destination.
There are also some mathematical considerations for values near each of the near-0 locations, such as the north and south pole, and the right-most and left-most, and fore-most and aft-most positions, so check those first and perform an additional rotation by pi/4 (45 degrees) if they're at those locations. This prevents floating point math libraries from freaking out and producing wildly out-of-character values for atan2() and other trig functions.
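Worth noting: for unit vectors, the whole rotation sequence above lands on the same point as simply normalizing the chord midpoint (the spherical midpoint), which also sidesteps those pole special cases:

```javascript
// Midpoint along the great-circle arc between two unit vectors a and b.
function sphericalMidpoint(a, b) {
  const m = [a[0] + b[0], a[1] + b[1], a[2] + b[2]]; // chord direction
  const n = Math.hypot(m[0], m[1], m[2]);
  return [m[0] / n, m[1] / n, m[2] / n];
}
```

The only degenerate case is a pair of antipodal points (m becomes the zero vector), which never occurs when subdividing icosahedron edges.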
Hope this helps! :-)
