I am trying to render a small picture-in-picture display over my scene. The PiP is just a smaller texture, but it is intended to reveal secret objects in the scene when it is placed over them.
To do this, I want to render my scene, then render the SAME scene on the smaller texture, but with the exact same positioning as the main scene. The intended result would be something like this:
My problem is... I cannot get the scene on the smaller texture to match up 1:1. I keep trying various kludges, but ultimately I suspect that I need to do something to the projection matrix to pan it over to the location of the frame. I can get it to zoom correctly...just can't get it to pan.
Can anyone suggest what I need to do to my projection matrix to render my scene 1:1 (but panned by x,y) onto a smaller texture?
The data I have:
Resolution of the full-screen framebuffer
Resolution of the smaller texture
XY coordinate where I want to draw the smaller texture as an overlay sprite
The world/view/projection matrices from the original full-screen scene
The viewport from the original full-screen scene
(Edit)
Here is the function I use to produce the 3D camera:
void Make3DCamera(Vector theCameraPos, Vector theLookAt, Vector theUpVector, float theFOV, Point theRez, Matrix& theViewMatrix, Matrix& theProjectionMatrix)
{
    Matrix aCombinedViewMatrix;
    Matrix aViewMatrix;

    // Flip handedness and move the camera to the origin.
    aCombinedViewMatrix.Scale(1,1,-1);
    theCameraPos.mZ*=-1;
    theLookAt.mZ*=-1;
    theUpVector.mZ*=-1;
    aCombinedViewMatrix.Translate(-theCameraPos);

    // Build an orthonormal camera basis.
    Vector aLookAtVector=theLookAt-theCameraPos;
    Vector aSideVector=theUpVector.Cross(aLookAtVector);
    theUpVector=aLookAtVector.Cross(aSideVector);
    aLookAtVector.Normalize();
    aSideVector.Normalize();
    theUpVector.Normalize();

    aViewMatrix.mData.m[0][0] = -aSideVector.mX;
    aViewMatrix.mData.m[1][0] = -aSideVector.mY;
    aViewMatrix.mData.m[2][0] = -aSideVector.mZ;
    aViewMatrix.mData.m[3][0] = 0;
    aViewMatrix.mData.m[0][1] = -theUpVector.mX;
    aViewMatrix.mData.m[1][1] = -theUpVector.mY;
    aViewMatrix.mData.m[2][1] = -theUpVector.mZ;
    aViewMatrix.mData.m[3][1] = 0;
    aViewMatrix.mData.m[0][2] = aLookAtVector.mX;
    aViewMatrix.mData.m[1][2] = aLookAtVector.mY;
    aViewMatrix.mData.m[2][2] = aLookAtVector.mZ;
    aViewMatrix.mData.m[3][2] = 0;
    aViewMatrix.mData.m[0][3] = 0;
    aViewMatrix.mData.m[1][3] = 0;
    aViewMatrix.mData.m[2][3] = 0;
    aViewMatrix.mData.m[3][3] = 1;
    if (gG.mRenderToSprite) aViewMatrix.Scale(1,-1,1);
    aCombinedViewMatrix*=aViewMatrix;

    // Projection Matrix
    float aAspect = (float) theRez.mX / (float) theRez.mY;
    float aNear = gG.mZRange.mData1;
    float aFar = gG.mZRange.mData2;
    float aWidth = gMath.Cos(theFOV / 2.0f);
    float aHeight = gMath.Cos(theFOV / 2.0f);
    if (aAspect > 1.0) aWidth /= aAspect;
    else aHeight *= aAspect;
    float s = gMath.Sin(theFOV / 2.0f);
    float d = 1.0f - aNear / aFar;

    Matrix aPerspectiveMatrix;
    aPerspectiveMatrix.mData.m[0][0] = aWidth;
    aPerspectiveMatrix.mData.m[1][0] = 0;
    aPerspectiveMatrix.mData.m[2][0] = gG.m3DOffset.mX/theRez.mX/2;
    aPerspectiveMatrix.mData.m[3][0] = 0;
    aPerspectiveMatrix.mData.m[0][1] = 0;
    aPerspectiveMatrix.mData.m[1][1] = aHeight;
    aPerspectiveMatrix.mData.m[2][1] = gG.m3DOffset.mY/theRez.mY/2;
    aPerspectiveMatrix.mData.m[3][1] = 0;
    aPerspectiveMatrix.mData.m[0][2] = 0;
    aPerspectiveMatrix.mData.m[1][2] = 0;
    aPerspectiveMatrix.mData.m[2][2] = s / d;
    aPerspectiveMatrix.mData.m[3][2] = -(s * aNear / d);
    aPerspectiveMatrix.mData.m[0][3] = 0;
    aPerspectiveMatrix.mData.m[1][3] = 0;
    aPerspectiveMatrix.mData.m[2][3] = s;
    aPerspectiveMatrix.mData.m[3][3] = 0;

    theViewMatrix=aCombinedViewMatrix;
    theProjectionMatrix=aPerspectiveMatrix;
}
Edit to add more information:
Just playing and tweaking numbers, I have come to a "close" result. However the "close" result requires a multiplication by some kludge numbers, that I don't understand.
Here's what I'm doing to the perspective matrix to produce my close result:
//Before calling Make3DCamera, adjusting FOV:
aFOV*=smallerTexture.HeightF()/normalRenderSize.HeightF(); // Zoom it
aFOV*=1.02f; // <- WTH is this?
//Then, to pan the camera over to the x/y position I want, I do:
Matrix aPM=GetCurrentProjectionMatrix();
float aX=(screenX-normalRenderSize.WidthF()/2.0f)/2.0f;
float aY=(screenY-normalRenderSize.HeightF()/2.0f)/2.0f;
aX*=1.07f; // <- WTH is this?
aY*=1.07f; // <- WTH is this?
aPM.mData.m[2][0]=-aX/normalRenderSize.HeightF();
aPM.mData.m[2][1]=-aY/normalRenderSize.HeightF();
SetCurrentProjectionMatrix(aPM);
When I do this, my new picture is VERY close... but not exactly perfect-- the small render tends to drift away from "center" the further the "magic window" is from the center. Without the kludge number, the drift away from center with the magic window is very pronounced.
The kludge numbers 1.02f for zoom and 1.07 for pan reduce the inaccuracies and drift to a fraction of a pixel, but those numbers must be a ratio from somewhere, right? They work at ANY RESOLUTION, though-- so I have a 1280x800 screen and a 256x256 magic window texture... if I change the screen to 1024x768, it all still works.
Where the heck are these numbers coming from?
If you don't care about sub-optimal performance (i.e., drawing the whole scene twice) and if you don't need the smaller scene in a texture, an easy way to obtain the overlay with pixel perfect precision is:
Set up main scene (model/view/projection matrices, etc.) and draw it as you are now.
Use glScissor to set the rectangle for the overlay. glScissor takes the screen-space x, y, width, and height and discards anything outside that rectangle. It looks like you have those four data items already, so you should be good to go.
Call glEnable(GL_SCISSOR_TEST) to actually turn on the test.
Set the shader variables (if you're using shaders) for drawing the greyscale scene/hidden objects/etc. You still use the same view and projection matrices that you used for the main scene.
Draw the greyscale scene/hidden objects/etc.
Call glDisable(GL_SCISSOR_TEST) so you won't be scissoring at the start of the next frame.
Draw the red overlay border, if desired. (The scissored pass is sketched below.)
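In GL calls, the scissored pass might look roughly like this (drawScene and drawHiddenObjects are placeholders for your own drawing code, and the overlay rectangle is the x, y, width, height you already have):

// pass 1: the normal scene over the whole framebuffer
drawScene(viewMatrix, projectionMatrix);            // placeholder for your draw code
// pass 2: same matrices, clipped to the overlay rectangle
glScissor(overlayX, overlayY, overlayW, overlayH);  // window coords, origin bottom-left
glEnable(GL_SCISSOR_TEST);
drawHiddenObjects(viewMatrix, projectionMatrix);    // same view/projection as pass 1
glDisable(GL_SCISSOR_TEST);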
Now, if you actually need the overlay in its own texture for some reason, this probably won't be adequate...it could be made to work either with framebuffer objects and/or pixel readback, but this would be less efficient.
Most people completely overcomplicate such issues. There is absolutely no magic to applying transformations after applying the projection matrix.
If you have a projection matrix P (and I'm assuming default OpenGL conventions here where P is constructed in a way that the vector is post-multiplied to the matrix, so for an eye space vector v_eye, we get v_clip = P * v_eye), you can simply pre-multiply some other translate and scale transforms to cut out any region of interest.
Assume you have a viewport of size w_view * h_view pixels, and you want to find a projection matrix which renders only a tile of w_tile * h_tile pixels, beginning at pixel location (x_tile, y_tile) (again assuming default GL conventions, where the window space origin is bottom left, so y_tile is measured from the bottom). Also note that the _tile coordinates are to be interpreted relative to the viewport; in the typical case, that would start at (0,0) and have the size of your full framebuffer, but this is by no means required nor assumed here.
Since after applying the projection matrix we are in clip space, we need to transform our coordinates from window space pixels to clip space. Note that clip space is a 4D homogeneous space, but we can use any w value we like (except 0) to represent any point (as a point in the 3D space we care about forms a line in the 4D space we work in), so let's just use w=1 for simplicity's sake.
The view volume in clip space is denoted by the [-w,w] range, so in the w=1 hyperplane, it is [-1,1]. Converting our tile into this space yields:
x_clip = 2 * (x_tile / w_view) - 1
y_clip = 2 * (y_tile / h_view) - 1
w_clip = 2 * w_tile / w_view
h_clip = 2 * h_tile / h_view
We now just need to translate the objects such that the center of the tile is moved to the center of the view volume, which by definition is the origin, and scale the w_clip * h_clip sized region to the full [-1,1] extent in each dimension.
That means:
T = translate(-(x_clip + 0.5*w_clip), -(y_clip + 0.5*h_clip), 0)
S = scale(2.0/w_clip, 2.0/h_clip, 1.0)
We can now create the modified projection matrix P' as P' = S * T * P, and that's all there is. Rendering with P' instead of P will render exactly the region of your tile to whatever viewport you are using, so for it to be pixel-exact with respect to your original viewport, you must now render with a viewport which is also w_tile * h_tile pixels big.
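In code, with the caveat that this is a sketch assuming GLM (any column-major matrix library will do):

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// Build P' = S * T * P so that rendering with it shows exactly the
// w_tile x h_tile pixel region at (x_tile, y_tile) of the original view.
glm::mat4 tileProjection(const glm::mat4 &P,
                         float x_tile, float y_tile,
                         float w_tile, float h_tile,
                         float w_view, float h_view)
{
    // window-space tile -> clip-space position and size (w = 1 hyperplane)
    float x_clip = 2.0f * (x_tile / w_view) - 1.0f;
    float y_clip = 2.0f * (y_tile / h_view) - 1.0f;
    float w_clip = 2.0f * w_tile / w_view;
    float h_clip = 2.0f * h_tile / h_view;

    glm::mat4 T = glm::translate(glm::mat4(1.0f),
        glm::vec3(-(x_clip + 0.5f*w_clip), -(y_clip + 0.5f*h_clip), 0.0f));
    glm::mat4 S = glm::scale(glm::mat4(1.0f),
        glm::vec3(2.0f/w_clip, 2.0f/h_clip, 1.0f));
    return S * T * P;
}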
Note that there is also another approach: the viewport is not clamped against the framebuffer you're rendering to. It is actually valid to provide negative values for x and y. If your framebuffer for rendering your tile into is exactly w_tile * h_tile pixels, you could simply set glViewport(-x_tile, -y_tile, w_view, h_view) and render with the unmodified projection matrix P instead.
Given the two following inputs:
a point on a sphere (like an observer on Earth);
and the world matrix of an object in space (the position and attitude of a satellite),
how to get the azimuth and elevation of the object in the tangent space of the point on the sphere (the elevation and azimuth of where the observer should look at)? In particular, when the object is exactly at the zenith, the yaw rotation (rotation around the vertical axis) should account for the azimuth (so that, though the observer is looking straight up, his shoulders would be facing the same azimuth as the object).
The math I've tried so far is:
to put the satellite in tangent space (multiplying its world matrix by the inverse of the matrix of the tangent space on the globe), or the same with quaternions. An Euler rotation is then deduced from the resulting matrix (or the resulting quaternion) with a "ZXY" priority, and the Z and X are interpreted as azimuth and elevation. But this gives incorrect numbers, as part of the rotation often seems to be interpreted as roll (Y-axis rotation), which I want to be zero.
an intuitive approach is to compute the angle between the observer-to-object vector and the vertical axis to deduce the elevation, while the azimuth is given by the angle between the tangent north and the projected position of the object on the "tangent ground" (plus some more math to hone this particular deduction). But this approach does not work for the case of the object at the zenith (see the sketch after this list).
Resources exist online but not with these specific inputs and the necessity of supporting the zenith case.
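For reference, the elevation/azimuth step of the second (intuitive) approach is usually written with atan2, which removes the quadrant headaches but still leaves the azimuth undefined exactly at the zenith. A minimal sketch in plain C++ (the east/north/up tangent basis at the observer is assumed to be available):

#include <cmath>

// Inputs: the unit vector from the observer to the object, expressed in
// the observer's tangent frame (east, north, up).
void toAzEl(double east, double north, double up,
            double &azimuth, double &elevation)
{
    elevation = std::asin(up);           // angle above the horizon
    azimuth   = std::atan2(east, north); // clockwise from north; undefined at zenith
}

At the exact zenith, east and north both vanish, so the azimuth there has to be recovered from the object's own orientation (for example, its forward axis projected into the tangent plane), which is precisely the case the question singles out.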
Incidentally the program is written in TypeScript for three.js, and so the code for the first solution described above goes as follows:
function getRotationAtPoint(
  object: THREE.Object3D,
  point: THREE.Vector3
): { azimuth: number, elevation: number } {
  // 1. Get the matrix of the tangent space of the observer.
  const tangentSpaceMatrix = new THREE.Matrix4();
  const baseTangentSpaceAxes = getBaseTangentAxesOnSphere(point);
  tangentSpaceMatrix.makeBasis(...baseTangentSpaceAxes);

  // 2. Transform the object's matrix into the tangent space of the observer.
  const inverseMatrix = new THREE.Matrix4().getInverse(tangentSpaceMatrix);
  const objectMatrix = object.matrixWorld.clone().multiply(inverseMatrix);

  // 3. Get the angles.
  const euler = new THREE.Euler().setFromRotationMatrix(objectMatrix);
  return {
    azimuth: euler.z,
    elevation: euler.x
  };
}
Also, Three.js offers references to the up axis of THREE.Object3D instances, however the program I deal with computes everything directly into the objects' matrices and the up axis can't be trusted.
I am unsure of what math to use here, as I am very inexperienced in using math together with code to solve problems like this, so I was wondering if anyone here could either give me some pointers or point me somewhere to look for this specific problem. I have the x and y trajectory of a single point (or ball); when it moves, it acts like a line, moving toward a place where it has to stop, then bouncing off of it (reflecting the trajectory) and continuing in that bounced trajectory. I just need an algorithm that returns true/false (boolean) for whether the current slope will come in contact with the rectangle. I have the 4 corner points of the rectangle, and the middle point of the rectangle if needed.
After some back-and-forth in the comments, here is a function that may better suit OP's purpose. The is_intersect() function takes as input the position of the point on the trajectory, its direction, and a quadrilateral, and returns true if the point is on an intersecting trajectory, false otherwise.
pos is a table containing the position of the point, of the form:
pos = { x=x1, y=y1 }
dir is a number containing a positive angle in radians (0 <= θ < 2π) with respect to the positive x-axis, representing the direction of travel of the point.
quad is a table representing a quadrilateral, of the form:
quad = {{x=x1, y=y1}, {x=x2, y=y2}, {x=x3, y=y3}, {x=x4, y=y4}}
Note that it would be a simple matter, and perhaps desirable, to adapt the code to use integer-indexed tables instead, such as pos = {x1, y1} and quad = {{x1, y1}, {x2, y2},...}.
This function works for quadrilaterals situated anywhere in the plane, and for trajectory points situated anywhere in the plane. It works by finding the positive angles, with respect to the positive x-axis, of the lines through the trajectory point and each of the corners of the quadrilateral. The function returns true if the direction angle lies between the smallest and largest of those angles.
function is_intersect(pos, dir, quad)
    local theta_corner, theta_min, theta_max
    for i = 1, 4 do
        local x, y = quad[i].x, quad[i].y
        -- Find angle of line from pos to corner
        theta_corner = math.atan((y - pos.y) / (x - pos.x))
        -- Adjust angle to quadrant
        if x < pos.x then
            theta_corner = theta_corner + math.pi
        elseif y < pos.y then
            theta_corner = theta_corner + 2*math.pi
        end
        -- Keep max and min angles
        if (not theta_min) or theta_corner < theta_min then
            theta_min = theta_corner
        end
        if (not theta_max) or theta_corner > theta_max then
            theta_max = theta_corner
        end
    end
    -- Compare direction angle with max and min angles
    return theta_min <= dir and dir <= theta_max
end
Here is a sample interaction:
> test = {{x = 1, y = 1}, {x = 1, y = 2}, {x = 2, y = 2}, {x = 2, y = 1}}
> position = {x = 3, y = 3}
> pi = math.pi
> is_intersect(position, 5*pi/4, test)
true
> angle = math.atan(.5)
> -- checking corners
> is_intersect(position, pi + angle, test)
true
> is_intersect(position, 3*pi/2 - angle, test)
true
> -- checking slightly inside corners
> is_intersect(position, 1.01*(pi + angle), test)
true
> is_intersect(position, .99*(3*pi/2 - angle), test)
true
> -- checking slightly outside corners
> is_intersect(position, .99*(pi + angle), test)
false
> is_intersect(position, 1.01*(3*pi/2 - angle), test)
false
How it Works
This section is for OP's benefit. Read no further if you do not want a Trigonometry refresher.
In this diagram, there are two points with directions represented by a green and a yellow arrow. The red lines connect the points with the corners of the rectangle. The is_intersect() function works by calculating the angles, measured from the positive x-axis, as in the diagram, to the lines connecting the point to each of the corners of the rectangle. You can see that there will be four such lines for each point, but only two are marked in the diagram. One of these is the largest such angle, and the other is the smallest. The direction of travel for the point is specified by an angle, also measured from the positive x-axis. If this angle is between the angles to the two red lines, then the point is on an intersecting trajectory with the rectangle. The green point is on an intersecting trajectory: the angle from the positive x-axis to the line of travel for the green point (what we might call its velocity vector) is between the other two angles for this point. But the yellow point is not on an intersecting trajectory: the angle from the positive x-axis to the line of travel for the yellow point is larger than both of the other two angles.
Now, in this diagram there are four triangles. Each of the triangles has an angle at the origin of the coordinate system. We define the tangent of this angle to be the ratio of the length of the side opposite the angle (the vertical leg of the triangle) to the length of the side adjacent to the angle (the horizontal leg). That is:
tan(A) = y/x
Furthermore, the angle A is the arctangent of y/x:
A = atan(y/x)
From this diagram, it can be seen that to find the direction angle of the line connecting the point on the trajectory to the corner of the rectangle, we can calculate the angle A from the triangle, and add 270°. We really add 3π/2 radians. For a variety of reasons which I will not go into now, radians are better. Once you get used to them, you will never use degrees for any sort of calculations ever again. If you ever study Calculus, you will have to use radians. But the practical issue at the moment is that Lua's trig functions take arguments in radians, and report angles in radians.
So, the angle A is atan(y/x). How to find x and y? By subtracting the value of the x-coordinate of the point from the x-coordinate of the rectangle corner we can find x. Similarly, by subtracting the value of the y-coordinate of the point from the y-coordinate of the rectangle corner, we can find y.
We still need to know which quadrant the angle is in so that we know how much to add to A. The quadrants are traditionally labelled with the Roman numerals from Ⅰ to Ⅳ, starting from the upper right quadrant of the x-y axis. The angle in the diagram is in quadrant Ⅳ. We can tell which quadrant the angle is in by checking to see if the point is to the left or right of the corner of the rectangle, and above or below the corner of the rectangle.
With this information, the direction angles to each of the corners from the point can be found. The largest and smallest are kept and compared with the direction angle for the trajectory of the point to see if the trajectory will intersect the rectangle.
The above exposition is a pretty good description of what the code in the is_intersect() function is doing. There is a subtlety in that the subtraction to find the sides of the triangles gives negative side-lengths in some cases. This is a normal issue in trigonometric calculations; the code needs to know how the atan() function being used handles negative values. The code under the comment -- Adjust angle to quadrant takes this into account.
For the case of a coordinate system with the origin at the upper left, you simply need to measure angles in a clockwise fashion instead. This is analogous to flipping the original coordinate system upside down. By the way, the original coordinate system (with the origin at the lower-left) is the one usually found in Mathematics, and is called a right-handed coordinate system. The other system, with the origin at the upper-left, is called a left-handed coordinate system.
Addendum
In an attempt to help OP understand these functions, I provided this simple testing code. What follows is a further attempt to explicate these tests.
The above diagram shows in part the situation represented in the tests. There is a 1x1 rectangle located in quadrant I. Note that these are screen coordinates, with the origin at the upper left. All angles are measured in a clockwise direction from the positive x-axis.
In the tests you will find:
pos1 = { x = 0, y = 0 }
pos3 = { x = 3, y = 3 }
These are two positions from which we wish to test the functions. The first position is at the origin, the second is at the location (3, 3). The red lines in the diagram help us to see the direction angles from the test points to the corners of the rectangle.
First note that A = atan(1 / 2), or A = atan(.5). This angle will show up again, so I have defined the constants:
angle = math.atan(.5)
pi = math.pi
Next notice that the angle B is (π/2 - A) radians. Convince yourself that this is true by looking at the symmetry of the situation. There is no need to convert any of these angles to degrees, in fact do not do this! This will only cause problems; the usual trig functions expect arguments in radians, and return angles in radians.
Continuing, if you look closely you should be able to see that the angle C is (π + A) radians, and the angle D is (3π/2 - A) radians.
So we have, for example, these tests:
{ pass = true, args = { pos1, pi/4, rect } },
{ pass = true, args = { pos3, 5*pi/4, rect } },
The first of these tests the trajectory from position pos1, (0, 0), with a direction angle of π/4 radians. This should intersect the corner nearest pos1. The second tests the trajectory from position pos3, (3, 3), with a direction angle of 5π/4 radians. This should intersect the corner nearest pos3. So both tests should return true.
There are also tests like:
{ pass = true, args = { pos3, pi + angle, rect } },
{ pass = true, args = { pos3, 3*pi/2 - angle, rect } },
{ pass = true, args = { pos3, 1.1*(pi + angle), rect} },
{ pass = false, args = { pos3, 1.1*(3*pi/2 - angle), rect } },
{ pass = false, args = { pos3, .99*(pi + angle), rect } },
{ pass = true, args = { pos3, .99*(3*pi/2 - angle), rect } },
Now pi + angle and 3*pi/2 - angle are the direction angles to the outside corners of the rectangle from pos3, as previously noted. So the first two tests should return true, since we expect trajectories directed at the corners to intersect the rectangle. The remaining tests make the direction angles larger or smaller. Looking back at the diagram, note that if the direction angle of the trajectory from pos3 is made larger than C, there should be an intersection, but if it is made smaller than C, there should not be an intersection. Similarly, if the direction angle of the trajectory from pos3 is made larger than D, there should be no intersection, while if it is made smaller, there should be an intersection.
All of this is just to explain what is happening in the tests, so that you can understand them and write tests of your own. In practice, you only need to specify the quadrilateral with the quad table, a position with the pos table, and a direction with dir, which is a number in radians. You specified straight-line motion; curvilinear motion is more involved.
After all of this, I still strongly suggest, as I did earlier in the comments, that you consider using @Gene's function. To use this function you specify a direction vector instead of an angle, using a table: dir = {x = x1, y = y1}. This will save you many troubles, and as you mentioned earlier, you already have x- and y-components for the direction of the trajectory. Converting this information to an angle will require you to understand how to make the correct adjustments for the quadrant of the angle. Don't mess with this right now.
Here are the tests of position pos3 for Gene's function:
{ pass = true, args = { pos3, {x=-1.01, y=-2}, rect} },
{ pass = false, args = { pos3, {x=-0.99, y=-2}, rect } },
{ pass = false, args = { pos3, {x=-2.01, y=-1}, rect } },
{ pass = true, args = { pos3, {x=-1.99, y=-1}, rect } },
Instead of worrying about angles, and referring back to the last diagram, note that arrows from pos3 to the outside corners of the rectangle can be drawn by going 1 in the negative x-direction, and 2 in the negative y-direction, or by going 2 in the negative x-direction and 1 in the negative y-direction. So to test for slightly off-corner intersections, as before, the x-components of the trajectory direction vectors are modified. For example, from pos3, the trajectory {-2, -1} should intersect at the corner nearest the y-axis. So, the trajectory {-2.01, -1} should not intersect with this corner, and the trajectory {-1.99, -1} should intersect.
One caveat. Notice that there are four tests, and not six as before. Gene's function will report that there is no intersection from pos3 with the trajectory vector {-2, -1}. This is not incorrect, but just a difference in the way his code handles corner cases. In fact, I should probably remove the tests that directly test at the corners of the rectangle for my function, and just test near the corners, which is what really matters. Both functions report the same results for all other cases. The tiniest bit away from a corner, and both functions report false; the tiniest bit inside the corners, and both functions report true.
IMO atan is a bad idea. It's expensive, and the quadrant adjustment is way too hard.
You can simplify the accepted code by using atan2 rather than atan, but that's still too expensive.
You don't really care about the angle. You care about the signs of the angles between the direction vector and the extreme vectors from current position to box corners.
For this, cross-products are cheaper, simpler, and better-suited. If you have two vectors a and b, then the quantity:
s = a.x * b.y - a.y * b.x
is positive if the acute angle from a to b is positive (counter-clockwise) and vice versa.
You need only to verify that the acute angles formed by the direction vector and lines from the current point to each of the box corners include at least one positive and one negative result.
function is_intersect(pos, dir, quad)
    local n_pos, n_neg = 0, 0
    for i = 1, 4 do
        local dx, dy = quad[i].x - pos.x, quad[i].y - pos.y
        local s = dir.x * dy - dir.y * dx
        if s > 0 then n_pos = n_pos + 1 end
        if s < 0 then n_neg = n_neg + 1 end
    end
    return n_pos > 0 and n_neg > 0
end
Sorry I'm not a Lua programmer. This is a best guess at Lua syntax using the already-accepted answer as a pattern.
I'm about to project an image into a cylindrical panorama. But first I need to get the pixel (or the color from the pixel) I'm going to draw, then do some math in shaders with polar coordinates to get the new position of the pixel, and then finally draw the pixel.
Using this way I'll be able to change shape of image from polygon shape to whatever I want.
But I cannot find anything about this method (get pixel first, then do the Math and get new position for pixel).
Is there something like this, please?
OpenGL historically doesn't work that way around; it forward renders — from geometry to pixels — rather than backwards — from pixel to geometry.
The most natural way to achieve what you want to do is to calculate texture coordinates based on geometry, then render as usual. For a cylindrical mapping:
establish a mapping from cylindrical coordinates to texture coordinates;
with your actual geometry, imagine it placed within the cylinder, then from each vertex proceed along the normal until you intersect the cylinder. Use that location to determine the texture coordinate for the original vertex.
The latter is most easily and conveniently done within your geometry shader; it's a simple ray intersection test, with attributes therefore being only vertex location and vertex normal, and texture location being a varying that is calculated purely from the location and normal.
Extemporaneously, something like:
// get intersection as if the ray hits the circular region of the cylinder,
// i.e. where |(position + n*normal).xy| = 1
float planarLengthOfPosition = length(position.xy);
float planarLengthOfNormal = length(normal.xy);
float planarDistanceToPerimeter = 1.0 - planarLengthOfPosition;
vec3 circularIntersection = position +
    (planarDistanceToPerimeter/planarLengthOfNormal)*normal;

// get intersection as if the ray hits the bottom or top of the cylinder,
// i.e. where |(position + n*normal).z| = 1
float linearLengthOfPosition = abs(position.z);
float linearLengthOfNormal = abs(normal.z);
float linearDistanceToEdge = 1.0 - linearLengthOfPosition;
vec3 endIntersection = position +
    (linearDistanceToEdge/linearLengthOfNormal)*normal;

// pick whichever of those was lesser
vec3 cylindricalIntersection = mix(circularIntersection,
                                   endIntersection,
                                   step(linearDistanceToEdge,
                                        planarDistanceToPerimeter));

// ... do something to map cylindrical intersection to texture coordinates ...
textureCoordinateVarying =
    coordinateFromCylindricalPosition(cylindricalIntersection);
With a common implementation of coordinateFromCylindricalPosition possibly being simply return vec2(atan(cylindricalIntersection.y, cylindricalIntersection.x) / 6.28318530717959, cylindricalIntersection.z * 0.5);.
I have a lot of points on the surface of the sphere.
How can I calculate the area/spot of the sphere that has the largest point density?
I need this to be done very fast. If this was a square for example I guess I could create a grid and then let the points vote which part of the grid is the best.
I have tried transforming the points to spherical coordinates and then using a grid, but this did not work well, since points around the north pole are close on the sphere but distant after the transform.
Thanks
There is in fact no real reason to partition the sphere into a regular non-overlapping mesh; try this:
partition your sphere into semi-overlapping circles
see here for generating uniformly distributed points (your circle centers)
Dispersing n points uniformly on a sphere
you can identify the points in each circle very fast by a simple dot product; it really doesn't matter if some points are double counted, the circle with the most points still represents the highest density
Mathematica implementation
this takes 12 seconds to analyze 5000 points (and took about 10 minutes to write)
testcircles = {RandomReal[{0, 1}, {3}] // Normalize};
Do[While[(test = RandomReal[{-1, 1}, {3}] // Normalize;
     Select[testcircles, #.test > .9 &, 1]) == {}];
  AppendTo[testcircles, test];, {2000}];
vmax = testcircles[[First@
    Ordering[-Table[
       Count[(testcircles[[i]].#) & /@ points, x_ /; x > .98],
       {i, Length[testcircles]}], 1]]];
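For readers not using Mathematica, the core counting step is small in C++ as well. A sketch, assuming the circle centers and the points are already unit vectors:

#include <array>
#include <cstddef>
#include <vector>

using Vec3 = std::array<double,3>;

// Count how many points fall inside each circle (dot product above the
// cosine of the circle's angular radius) and return the densest circle.
std::size_t densestCircle(const std::vector<Vec3> &centers,
                          const std::vector<Vec3> &points,
                          double cosRadius /* e.g. 0.98 */)
{
    std::size_t best = 0, bestCount = 0;
    for (std::size_t i = 0; i < centers.size(); ++i) {
        std::size_t count = 0;
        for (const Vec3 &p : points)
            if (centers[i][0]*p[0] + centers[i][1]*p[1] + centers[i][2]*p[2] > cosRadius)
                ++count;
        if (count > bestCount) { bestCount = count; best = i; }
    }
    return best;
}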
To add some other, alternative schemes to the mix: it's possible to define a number of (almost) regular grids on sphere-like geometries by refining an inscribed polyhedron.
The first option is called an icosahedral grid, which is a triangulation of the spherical surface. By joining the centres of the triangles about each vertex, you can also create a dual hexagonal grid based on the underlying triangulation:
Another option, if you dislike triangles (and/or hexagons) is the cubed-sphere grid, formed by subdividing the faces of an inscribed cube and projecting the result onto the spherical surface:
In either case, the important point is that the resulting grids are almost regular -- so to evaluate the region of highest density on the sphere you can simply perform a histogram-style analysis, counting the number of samples per grid cell.
As a number of commenters have pointed out, to account for the slight irregularity in the grid it's possible to normalise the histogram counts by dividing through by the area of each grid cell. The resulting density is then given as a "per unit area" measure. To calculate the area of each grid cell there are two options: (i) you could calculate the "flat" area of each cell, by assuming that the edges are straight lines -- such an approximation is probably pretty good when the grid is sufficiently dense, or (ii) you can calculate the "true" surface areas by evaluating the necessary surface integrals.
If you are interested in performing the requisite "point-in-cell" queries efficiently, one approach is to construct the grid as a quadtree -- starting with a coarse inscribed polyhedron and refining its faces into a tree of sub-faces. To locate the enclosing cell you can simply traverse the tree from the root, which is typically an O(log(n)) operation.
You can get some additional information regarding these grid types here.
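To give a flavour of the point-in-cell query on the cubed-sphere grid: locating the cell that contains a unit vector is just a dominant-axis test followed by a gnomonic (u, v) bucketing. A sketch, not tied to any particular library:

#include <algorithm>
#include <cmath>

// Map a unit vector to (face, i, j) on an N x N cubed-sphere grid.
void cubedSphereCell(double x, double y, double z, int N,
                     int &face, int &i, int &j)
{
    double ax = std::fabs(x), ay = std::fabs(y), az = std::fabs(z);
    double u, v, m;
    if (ax >= ay && ax >= az) { face = x > 0 ? 0 : 1; m = ax; u = y/m; v = z/m; }
    else if (ay >= az)        { face = y > 0 ? 2 : 3; m = ay; u = x/m; v = z/m; }
    else                      { face = z > 0 ? 4 : 5; m = az; u = x/m; v = y/m; }
    // u and v are in [-1, 1] on the face; bucket them into N x N cells
    i = std::min(N - 1, (int)((u + 1.0) * 0.5 * N));
    j = std::min(N - 1, (int)((v + 1.0) * 0.5 * N));
}

Incrementing a per-(face, i, j) counter for every sample then gives the histogram described above.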
Treating points on a sphere as 3D points might not be so bad.
Try either:
Select k, do an approximate k-NN search in 3D for each point in the data or each selected point of interest, then weight the results by their distances to the query point. Complexity may vary for different approximate k-NN algorithms.
Build a space-partitioning data structure like k-d Tree, then do approximate (or exact) range counting query with a ball range centered at each point in the data or selected point of interest. Complexity is O(log(n) + epsilon^(-3)) or O(epsilon^(-3)*log(n)) for each approximate range query with state of the art algorithms, where epsilon is the range error threshold w.r.t. the size of the querying ball. For exact range query, the complexity is O(n^(2/3)) for each query.
Partition the sphere into equal-area regions (bounded by parallels and meridians) as described in my answer there and count the points in each region.
The aspect ratio of the regions will not be uniform (the equatorial regions will be more "squarish" when N~M, while the polar regions will be more elongated).
This is not a problem because the diameters of the regions go to 0 as N and M increase.
The computational simplicity of this method trumps the better uniformity of domains in the other excellent answers which contain beautiful pictures.
One simple modification would be to add two "polar cap" regions to the N*M regions described in the linked answer to improve the numeric stability (when the point is very close to a pole, its longitude is not well defined). This way the aspect ratio of the regions is bounded.
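A sketch of the indexing for such a partition, with the latitude bands taken uniformly in z = sin(latitude), which makes the regions equal-area by Archimedes' hat-box theorem (the per-band sector counts and the polar caps of the linked answer are simplified to a fixed M here):

#include <algorithm>
#include <cmath>

// Map a unit vector to one of N latitude bands x M longitude sectors,
// all regions having equal area.
int regionIndex(double x, double y, double z, int N, int M)
{
    int band   = std::min(N - 1, (int)((z + 1.0) * 0.5 * N)); // uniform in z
    double lon = std::atan2(y, x) + M_PI;                      // in [0, 2*pi]
    int sector = std::min(M - 1, (int)(lon / (2.0 * M_PI) * M));
    return band * M + sector;
}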
You can use the Peters projection, which preserves the areas.
This will allow you to efficiently count the points in a grid, but also in a sliding window (box Parzen window) by using the integral image trick.
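The integral-image (summed-area table) part might look like this, where counts is the per-cell histogram over the equal-area projection (a W x H row-major grid; the names are assumptions):

#include <vector>

// Build a summed-area table so any axis-aligned box of cells can be
// summed in O(1); this is what makes the sliding box window cheap.
std::vector<long> buildSAT(const std::vector<long> &counts, int W, int H)
{
    std::vector<long> sat(counts.size(), 0);
    for (int y = 0; y < H; ++y)
        for (int x = 0; x < W; ++x)
            sat[y*W + x] = counts[y*W + x]
                         + (x > 0 ? sat[y*W + x - 1] : 0)
                         + (y > 0 ? sat[(y - 1)*W + x] : 0)
                         - (x > 0 && y > 0 ? sat[(y - 1)*W + x - 1] : 0);
    return sat;
}

// Sum of counts over the inclusive cell box [x0,x1] x [y0,y1].
long boxSum(const std::vector<long> &sat, int W,
            int x0, int y0, int x1, int y1)
{
    long s = sat[y1*W + x1];
    if (x0 > 0)           s -= sat[y1*W + x0 - 1];
    if (y0 > 0)           s -= sat[(y0 - 1)*W + x1];
    if (x0 > 0 && y0 > 0) s += sat[(y0 - 1)*W + x0 - 1];
    return s;
}

Sliding the box over every position and keeping the maximum then finds the densest window in O(W*H).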
If I understand correctly, you are trying to find the densest spot on the sphere, assuming the points are denser around some location:
Consider Cartesian coordinates and find the mean X,Y,Z of points
Find the closest point to the mean X,Y,Z that is on the sphere (you may consider using spherical coordinates; just extend the radius to the original radius).
Constraints
If distance between mean X,Y,Z and the center is less than r/2, then this algorithm may not work as desired.
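The projection step is a single normalisation. A sketch, assuming the sphere is centred at the origin with radius r:

#include <cmath>
#include <vector>

struct P3 { double x, y, z; };

// Mean of the points, pushed back out onto the sphere surface.
P3 meanOnSphere(const std::vector<P3> &pts, double r)
{
    P3 m{0, 0, 0};
    for (const P3 &p : pts) { m.x += p.x; m.y += p.y; m.z += p.z; }
    m.x /= pts.size(); m.y /= pts.size(); m.z /= pts.size();
    double len = std::sqrt(m.x*m.x + m.y*m.y + m.z*m.z); // near 0 => no dominant cluster
    return { r*m.x/len, r*m.y/len, r*m.z/len };
}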
I am not a master of mathematics, but maybe it can be solved analytically as follows:
1. Sort the coordinates.
2. R = (Σ(n=0..max) Σ(m=0..n) (1/A^diff_in_consecutive) * angle) / Σ angle
where A may be any constant.
This is really just an inverse of this answer of mine
just invert the equations mapping equidistant sphere-surface vertices to a surface cell index. Don't even try to visualize the cells as anything other than circles or you will go mad. But if someone actually does it, then please post the result here (and let me know).
Now just create a 2D cell map and do the density computation in O(N) (like histograms are done), similar to what Darren Engwirda proposes in his answer.
This is how the code looks in C++:
//---------------------------------------------------------------------------
const int na=16; // sphere slices
int nb[na]; // cells per slice
const int na2=na<<1;
int map[na][na2]; // surface cells
const double da=M_PI/double(na-1); // latitude angle step
double db[na]; // longitude angle step per slice
// spherical -> orthonormal
void abr2xyz(double &x,double &y,double &z,double a,double b,double R)
{
    double r;
    r=R*cos(a);
    z=R*sin(a);
    y=r*sin(b);
    x=r*cos(b);
}
// spherical -> surface cell
void ab2ij(int &i,int &j,double a,double b)
{
    i=double(((a+(0.5*M_PI))/da)+0.5);
    if (i>=na) i=na-1;
    if (i< 0) i=0;
    j=double(( b /db[i])+0.5);
    while (j< 0) j+=nb[i];
    while (j>=nb[i]) j-=nb[i];
}
// spherical <- surface cell
void ij2ab(double &a,double &b,int i,int j)
{
    if (i>=na) i=na-1;
    if (i< 0) i=0;
    a=-(0.5*M_PI)+(double(i)*da);
    b= double(j)*db[i];
}
// init variables and clear map
void ij_init()
{
    int i,j;
    double a;
    for (a=-0.5*M_PI,i=0;i<na;i++,a+=da)
    {
        nb[i]=ceil(2.0*M_PI*cos(a)/da); // compute actual circle cell count
        if (nb[i]<=0) nb[i]=1;
        db[i]=2.0*M_PI/double(nb[i]); // longitude angle step
        if ((i==0)||(i==na-1)) { nb[i]=1; db[i]=1.0; }
        for (j=0;j<na2;j++) map[i][j]=0; // clear cell map
    }
}
//---------------------------------------------------------------------------
// this just draws circle from point x0,y0,z0 with normal nx,ny,nz and radius r
// need some vector stuff of mine so i did not copy the body here (it is not important)
void glCircle3D(double x0,double y0,double z0,double nx,double ny,double nz,double r,bool _fill);
//---------------------------------------------------------------------------
void analyse()
{
    // n is number of points and r is just visual radius of sphere for rendering
    int i,j,ii,jj,n=1000;
    double x,y,z,a,b,c,cm=1.0/10.0,r=1.0;
    // init
    ij_init(); // init variables and map[][]
    RandSeed=10; // just to have the same random points generated every frame (no need to store them)
    // generate, draw and process some random surface points
    for (i=0;i<n;i++)
    {
        a=M_PI*(Random()-0.5);
        b=M_PI* Random()*2.0 ;
        ab2ij(ii,jj,a,b); // cell coords
        abr2xyz(x,y,z,a,b,r); // 3D orthonormal coords
        map[ii][jj]++; // update cell density
        // this just draws the point (x,y,z) as a line in OpenGL so you can ignore it
        double w=1.1; // w-1.0 is rendered line size factor
        glBegin(GL_LINES);
        glColor3f(1.0,1.0,1.0); glVertex3d(x,y,z);
        glColor3f(0.0,0.0,0.0); glVertex3d(w*x,w*y,w*z);
        glEnd();
    }
    // draw cell grid (color is function of density)
    for (i=0;i<na;i++)
        for (j=0;j<nb[i];j++)
        {
            ij2ab(a,b,i,j); abr2xyz(x,y,z,a,b,r);
            c=map[i][j]; c=0.1+(c*cm); if (c>1.0) c=1.0;
            glColor3f(0.2,0.2,0.2); glCircle3D(x,y,z,x,y,z,0.45*da,0); // outline
            glColor3f(0.1,0.1,c ); glCircle3D(x,y,z,x,y,z,0.45*da,1); // filled with bluish color; the denser the cell the brighter it is
        }
}
//---------------------------------------------------------------------------
The result looks like this:
So now just look at what is in the map[][] array; you can find the global/local min/max of density or whatever you need... Just do not forget that the size is map[na][nb[i]], where i is the first index in the array. The grid size is controlled by the na constant, and cm is just a density-to-color scale...
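For example, finding the global maximum is a straightforward scan over the same na/nb/map variables as above:

// find the densest cell (note: row i only has nb[i] valid cells)
int besti=0,bestj=0;
for (int i=0;i<na;i++)
    for (int j=0;j<nb[i];j++)
        if (map[i][j]>map[besti][bestj]) { besti=i; bestj=j; }
// ij2ab(a,b,besti,bestj) then recovers the cell's spherical angles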
[edit1] got the quad grid, which is a far more accurate representation of the mapping used
This is with na=16; the worst rounding errors are at the poles. If you want to be precise, you can weight the density by cell surface size. All non-pole cells are simple quads; the poles are triangle fans (regular polygons).
This is the grid draw code:
// draw cell quad grid (color is function of density)
int i,j,ii,jj;
double x,y,z,a,b,c,cm=1.0/10.0,mm=0.49,r=1.0;
double dx=mm*da,dy;
for (i=1;i<na-1;i++) // ignore poles
    for (j=0;j<nb[i];j++)
    {
        dy=mm*db[i];
        ij2ab(a,b,i,j);
        c=map[i][j]; c=0.1+(c*cm); if (c>1.0) c=1.0;
        glColor3f(0.2,0.2,0.2);
        glBegin(GL_LINE_LOOP);
        abr2xyz(x,y,z,a-dx,b-dy,r); glVertex3d(x,y,z);
        abr2xyz(x,y,z,a-dx,b+dy,r); glVertex3d(x,y,z);
        abr2xyz(x,y,z,a+dx,b+dy,r); glVertex3d(x,y,z);
        abr2xyz(x,y,z,a+dx,b-dy,r); glVertex3d(x,y,z);
        glEnd();
        glColor3f(0.1,0.1,c );
        glBegin(GL_QUADS);
        abr2xyz(x,y,z,a-dx,b-dy,r); glVertex3d(x,y,z);
        abr2xyz(x,y,z,a-dx,b+dy,r); glVertex3d(x,y,z);
        abr2xyz(x,y,z,a+dx,b+dy,r); glVertex3d(x,y,z);
        abr2xyz(x,y,z,a+dx,b-dy,r); glVertex3d(x,y,z);
        glEnd();
    }
// one pole cap (triangle fan)
i=0; j=0; ii=i+1; dy=mm*db[ii];
ij2ab(a,b,i,j); c=map[i][j]; c=0.1+(c*cm); if (c>1.0) c=1.0;
glColor3f(0.2,0.2,0.2);
glBegin(GL_LINE_LOOP);
for (j=0;j<nb[ii];j++) { ij2ab(a,b,ii,j); abr2xyz(x,y,z,a-dx,b-dy,r); glVertex3d(x,y,z); }
glEnd();
glColor3f(0.1,0.1,c );
glBegin(GL_TRIANGLE_FAN); abr2xyz(x,y,z,a ,b ,r); glVertex3d(x,y,z);
for (j=0;j<nb[ii];j++) { ij2ab(a,b,ii,j); abr2xyz(x,y,z,a-dx,b-dy,r); glVertex3d(x,y,z); }
glEnd();
// the other pole cap
i=na-1; j=0; ii=i-1; dy=mm*db[ii];
ij2ab(a,b,i,j); c=map[i][j]; c=0.1+(c*cm); if (c>1.0) c=1.0;
glColor3f(0.2,0.2,0.2);
glBegin(GL_LINE_LOOP);
for (j=0;j<nb[ii];j++) { ij2ab(a,b,ii,j); abr2xyz(x,y,z,a-dx,b+dy,r); glVertex3d(x,y,z); }
glEnd();
glColor3f(0.1,0.1,c );
glBegin(GL_TRIANGLE_FAN); abr2xyz(x,y,z,a ,b ,r); glVertex3d(x,y,z);
for (j=0;j<nb[ii];j++) { ij2ab(a,b,ii,j); abr2xyz(x,y,z,a-dx,b+dy,r); glVertex3d(x,y,z); }
glEnd();
The mm is the grid cell size; mm=0.5 is full cell size, less creates a space between cells.
If you want a radial region of the greatest density, this is the robust disk covering problem with k = 1 and dist(a, b) = great circle distance (a, b) (see https://en.wikipedia.org/wiki/Great-circle_distance)
https://www4.comp.polyu.edu.hk/~csbxiao/paper/2003%20and%20before/PDCS2003.pdf
Consider using a geographic method to solve this. GIS tools, geography data types in SQL, etc. all handle curvature of a spheroid. You might have to find a coordinate system that uses a pure sphere instead of an earthlike spheroid if you are not actually modelling something on Earth.
For speed, if you have large numbers of points and want the densest location among them, a raster heatmap type solution might work well. You could create low-resolution rasters, then zoom in to areas of high density and create higher-resolution rasters for only the cells you care about.