Given a group of 3D models spatially arranged in a specific formation, how do I scale them while preserving the relative distances between them?
Case in point: I have 10 meshes. Six of them are arranged to form a closed square room. The remaining four are pieces of furniture placed at appropriate locations inside it. All meshes have a scale of 1.0. I wish to increase it to, say, 2.0.
I am not a mathematician, so I'm going to use the most basic terminology I know to explain the procedure. You may even find this simple terminology easier to understand than mathematical "jargon".
You need to use the nominal centre points of all objects in the formation to determine the exact Formation Centre (this will be, of course, a 3D Vector consisting of an X, Y and Z value)...
Object Total = The total number of objects within your "formation"
Cycle through all objects in your formation
For each object (to calculate Axis Total)
Add the X co-ordinates together (gives us Axis Total X)
Add the Y co-ordinates together (gives us Axis Total Y)
Add the Z co-ordinates together (gives us Axis Total Z)
For each Axis Total axis (to calculate Formation Centre)
Formation Centre X = Axis Total X divided by Object Total
Formation Centre Y = Axis Total Y divided by Object Total
Formation Centre Z = Axis Total Z divided by Object Total
The three values you now have constitute the Formation Centre (as a 3D vector).
NOTE: If you are arranging your objects based on a pre-defined fixed point in 3D space (0, 0, 0 for example) you don't need to do the above calculation, as your Formation Centre will be that fixed point.
for each object
Calculate the offset of the Object Centre from the Formation Centre along each axis (Distance X, Distance Y and Distance Z)...
Distance X = Object Position X - Formation Centre X
Distance Y = Object Position Y - Formation Centre Y
Distance Z = Object Position Z - Formation Centre Z
Scale the object by your desired Scale Factor
Set the X, Y and Z Position values to the Formation Centre value of the same axis plus the distance value multiplied by the scale...
Position X = Formation Centre X + (Distance X * Scale Factor)
Position Y = Formation Centre Y + (Distance Y * Scale Factor)
Position Z = Formation Centre Z + (Distance Z * Scale Factor)
If you've done this correctly, your objects have now been scaled, still retain their formation, but have moved relative to the Formation Centre and Scale Factor. Simply put: these objects can no longer end up overlapping one another, because their positions have scaled along with their dimensions.
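A minimal Python sketch of scaling a formation about its centre point; the object list, its dictionary keys ("position", "scale") and the example data are assumptions for illustration.

```python
def scale_formation(objects, scale_factor):
    """Scale every object about the formation's centre point."""
    n = len(objects)
    # Formation Centre = average of all object centres
    cx = sum(o["position"][0] for o in objects) / n
    cy = sum(o["position"][1] for o in objects) / n
    cz = sum(o["position"][2] for o in objects) / n

    for o in objects:
        x, y, z = o["position"]
        # Offset from the Formation Centre along each axis
        dx, dy, dz = x - cx, y - cy, z - cz
        # Scale the object itself ...
        o["scale"] *= scale_factor
        # ... and move it so the spacing scales by the same factor
        o["position"] = (cx + dx * scale_factor,
                         cy + dy * scale_factor,
                         cz + dz * scale_factor)

objects = [{"position": (0.0, 0.0, 0.0), "scale": 1.0},
           {"position": (2.0, 0.0, 0.0), "scale": 1.0}]
scale_formation(objects, 2.0)
# The gap between the two objects doubles from 2.0 to 4.0
```

Note that with a scale factor of 1.0 every position is unchanged, which is a quick sanity check for the formula.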
To really answer this, we still need a bit more information about the format of your data and how you're applying transformations. But here's a guess:
Your objects are most likely represented as a collection of polygons, which are themselves represented as a collection of points relative to some 'root point', such as the center of the object or a bottom corner. When you place the object somewhere, like in a room, you do so by applying a sequence of matrix multiplications to the points that represent the object. A single matrix multiplication can usually do the whole transformation, but it makes more sense to us if we compose a sequence of transformations that do intuitive things. For example, you would usually:
Scale the object to be the size you want.
Rotate the object to be oriented the way you want.
Translate the object to be where you want.
All of these transformations happen relative to the 'root point' of the object and their order makes a big difference. If you translate and then scale or rotate, the scale and rotate will happen relative to the newly translated center.
So, if you have placed objects in a room, and [0,0,0] of your coordinate system happens to be in the center of the room, if you scale all of the transformed points of those objects, they will all grow/shrink and push outward/inward from [0,0,0]. Since that's not what you want, you must first change the origin by translating the object, then scale, then change the origin back to where it was.
If I have two points, [3,0,0] and [4,0,0], and I want to scale them so that the distance between them is 2 instead of 1, and I just multiply (scale) by 2, I get [6,0,0] and [8,0,0]. There's a distance of 2 between them now, but they both moved. If I want the first point to stay put, then I first translate by [-3,0,0], then I scale by 2, then I translate by [3,0,0], and I have what I wanted. If, instead, I want the center of those two points to remain the same, then I translate by [-3.5,0,0] before scaling and by [3.5,0,0] afterward.
It falls on you to decide which points of the objects should not move. Then you move that point to the origin before scaling. Then you move it back afterward. Since you don't want your objects to push through the floor, you'll probably pick a point on the ground (or whatever surface they're attached to). If you have one object resting on another (books on a desk) then those objects should probably use the same reference point.
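The translate-scale-translate idea can be sketched with homogeneous 4x4 matrices (NumPy here; the helper names are made up). This reproduces the two-point example, keeping [3,0,0] fixed:

```python
import numpy as np

def translation(tx, ty, tz):
    """4x4 homogeneous translation matrix."""
    m = np.eye(4)
    m[:3, 3] = [tx, ty, tz]
    return m

def scaling(s):
    """4x4 homogeneous uniform scaling matrix."""
    m = np.eye(4)
    m[0, 0] = m[1, 1] = m[2, 2] = s
    return m

# Scale by 2 about the point [3, 0, 0]: move that point to the origin,
# scale, then move it back. Matrices apply right-to-left.
M = translation(3, 0, 0) @ scaling(2) @ translation(-3, 0, 0)

p1 = M @ np.array([3.0, 0.0, 0.0, 1.0])
p2 = M @ np.array([4.0, 0.0, 0.0, 1.0])
# p1 stays at [3, 0, 0]; p2 moves to [5, 0, 0], doubling the distance
```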
Related
Given a 3D object, how do I convert it into an approximated shape in which all the sides of the object are parallel to either of the co-ordinate planes, and all the vertices have integer co-ordinates?
For example, a sphere with center at origin and a radius of 1.5 will be approximated to a cube with center at origin and side length of 2.
For another example, the line given by x = y = 0.5 will have an approximated shape of a rectangular parallelepiped with infinite length, width and breadth of 1, positioned such that one of its edges is along the z-axis, while all its faces lie in or parallel to either the x-z or y-z co-ordinate plane.
I am working with finite objects only, the above example is only meant to explain my needs.
I want an algorithm which can do this for me for any shape.
In the general case, you need to determine the maximum and minimum shape coordinates along every axis and define the minimal axis-aligned integer bounding box, rounding up (using Ceil) for the max coordinates and down (using Floor) for the min coordinates. For example:
XMin_Box = Floor(XMin_Shape)
XMax_Box = Ceil(XMax_Shape)
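A minimal sketch of that calculation in Python, assuming the shape is available as a list of (x, y, z) vertices (for analytic shapes like a sphere, you would use the known extents instead):

```python
import math

def integer_bounding_box(points):
    """Minimal axis-aligned integer box containing all points."""
    xs, ys, zs = zip(*points)
    mins = (math.floor(min(xs)), math.floor(min(ys)), math.floor(min(zs)))
    maxs = (math.ceil(max(xs)), math.ceil(max(ys)), math.ceil(max(zs)))
    return mins, maxs

mins, maxs = integer_bounding_box([(0.3, -1.2, 2.5), (1.7, 0.0, -0.5)])
# mins is (0, -2, -1) and maxs is (2, 0, 3)
```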
Edit:
If you need to approximate a shape with more precision, consider some kind of voxelization (a 3D analog of 2D rasterization).
I'm trying to calculate the axis of rotation of a ball which is moving and spinning at the same time, i.e. I want the vector along the axis that the ball is spinning on.
For every frame I know the x, y and z locations of 3 specific points on the surface of the sphere. I assume that by looking at how these 3 points have moved in successive frames, you can calculate the axis of rotation of the ball, however I have very little experience with this kind of maths, any help would be appreciated!
You could use the fact that the direction a position vector moves in will always be perpendicular to the axis of rotation. Therefore, if you have two position vectors v1 and v2 at successive times (for the same point), use

(v2 - v1) · w = 0
This gives you an equation with three unknowns (the components of w, the rotation axis). If you then plug in all three points you have knowledge of you should be able to solve these simultaneous equations and work out w.
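A sketch of this in Python/NumPy: each tracked point contributes one equation (v2 - v1) · w = 0; stacking the three displacement rows and taking the right singular vector with the smallest singular value is one way to solve the resulting homogeneous system. The sample data (a small rotation about the z-axis, no translation) are made up for illustration.

```python
import numpy as np

def rotation_axis(before, after):
    """Estimate the rotation axis from point positions in two frames."""
    D = np.asarray(after) - np.asarray(before)  # displacement rows
    # Each row d satisfies d . w = 0, so w minimises |D w|: it is the
    # right singular vector with the smallest singular value.
    _, _, vt = np.linalg.svd(D)
    return vt[-1]

theta = 0.1  # radians, rotation about the z-axis
c, s = np.cos(theta), np.sin(theta)
R = np.array([[c, -s, 0.0],
              [s,  c, 0.0],
              [0.0, 0.0, 1.0]])
before = np.array([[1.0, 0.0, 0.3],
                   [0.0, 1.0, -0.2],
                   [0.5, 0.5, 0.8]])
after = before @ R.T  # rotate each point

w = rotation_axis(before, after)
# w is parallel to the true axis (0, 0, 1), up to sign
```

Note that w comes out only up to sign and scale, and if the ball also translates between frames you would subtract the centre's motion from each point first.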
Could you explain to me the purpose of the -1 in the last row of the GL_PROJECTION matrix? And how does it affect the perspective division step?
The essential property of a perspective projection is that you divide the x/y coordinates by the depth (distance from viewer). This makes objects closer to the viewer (which have smaller depth values) larger, and objects farther from the viewer (which have larger depth values) smaller.
The next piece of the puzzle is how homogeneous coordinates work. The (x, y, z, w) coordinates in homogeneous space produced by the vertex shader are converted to regular 3D coordinates by dividing them by w:
(x, y, z, w) --> (x/w, y/w, z/w, 1)
So we want a division by the depth to achieve a perspective, and we know that the coordinates produced by the vertex shader will be divided by w. To get the desired result, we can simply put the depth value in eye coordinate space into the w coordinate.
This is exactly what the last row of the projection matrix does. The dot product of the last row with the input vector (which are the eye space coordinates of the vertex) produces the w value of the output:
(0 0 -1 0) * (x y z 1) = -z
You might have expected the value of the matrix element to be 1, to simply copy the z value in eye space to the w value of the vertex shader output. The reason we use -1 to invert the sign is based on the common arrangement of eye space coordinates in OpenGL.
Eye coordinates in OpenGL typically have the "camera" at the origin, looking down the negative z-axis. So the visible range of z-coordinates has negative values. Since we want the distance from the viewer in the resulting w-coordinate, we flip the sign of the eye space z-coordinate, which turns the negative z-values into positive distance values from the origin.
Note that much of this is just common policy, partly rooted in the legacy fixed function pipeline. With the programmable pipeline used in current OpenGL versions, you have complete freedom in how you organize your coordinate spaces and transformations. For example, you could easily use an eye space coordinate system where the camera points in the positive z-direction, and then have a 1 in the last row of the projection matrix instead of a -1.
As stated here: http://www.songho.ca/opengl/gl_projectionmatrix.html
"Therefore, we can set the w-component of the clip coordinates as -ze. And, the 4th of GL_PROJECTION matrix becomes (0, 0, -1, 0)."
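A small numeric illustration (Python/NumPy, not a complete OpenGL projection; the first three rows are placeholders) of how the (0, 0, -1, 0) row turns eye-space depth into the w used for the perspective division:

```python
import numpy as np

P = np.array([
    [1.0, 0.0,  0.0, 0.0],  # placeholder x row
    [0.0, 1.0,  0.0, 0.0],  # placeholder y row
    [0.0, 0.0,  1.0, 0.0],  # placeholder z row
    [0.0, 0.0, -1.0, 0.0],  # the row in question
])

# A point 5 units in front of the camera (negative z in eye space)
eye = np.array([2.0, 1.0, -5.0, 1.0])
clip = P @ eye
w = clip[3]
# w == 5.0: the positive distance from the viewer
ndc = clip[:3] / w  # the perspective division
```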
I have a circle I want to divide up in to a number of segments all defined by X and Y coordinates. How to I test to see if a point (X, Y) is in a particular segment?
A code example would be preferable.
You don't need to use trigonometry for this (and in general, trigonometry should be avoided whenever possible... it leads to too many precision, domain, and around-the-corner problems).
To determine whether a point P is counter-clockwise of another point A (in the sense of being in the half-plane defined by the left side of a directed line going through the origin and then through A), you can examine the sign of the result of Ax*Py - Ay*Px. This is generally known as the "perpendicular dot product", and is the same as the Z coordinate of the 3D cross product.
If there are two points A and B (with B defining the CCW-most extent) defining a sector, and the sector is less than half the circle, any point which is CCW of A and CW of B can be classified as in that sector.
That leaves only a sector which is more than half of the circle. Obviously, a given set of points can only define at most one such sector. There's clever things you can do with angle bisection, but the easiest approach is probably just to classify points as in that sector if you can't classify them as being in any other sector.
Oh, forgot to mention -- determining the order of the points for the purposes of pairing them up for sectors. Not to go against my previous advice, but the most straightforward thing here is just to sort them by their atan2 (not atan... never ever use atan).
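The perp-dot test above can be sketched as follows (Python; the sector boundaries are made-up example data, and the sector is assumed to span less than half the circle, as in the answer):

```python
def perp_dot(a, p):
    """Ax*Py - Ay*Px: positive when p is CCW of a about the origin."""
    return a[0] * p[1] - a[1] * p[0]

def in_sector(p, a, b):
    """True if p lies in the sector from boundary a CCW around to b.

    Assumes the sector spans less than half the circle.
    """
    return perp_dot(a, p) >= 0 and perp_dot(b, p) <= 0

# Quarter-circle sector from (1, 0) counter-clockwise to (0, 1):
# (1, 1) is inside it, (-1, 1) and (1, -1) are not.
```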
Use the polar coordinate system centred at the centre of the circle, and examine the angular coordinate (φ in the Wikipedia article).
What exactly you do with φ depends on how your segments are defined. For example, if you have n equal segments that start at 0 radians, floor(φ * n / (2 * π)) will give you the segment number.
Your segment is defined by two intersections between the circle and a line. You just have to check that:
the angle from the centre of your circle to your point lies between the angles from the centre to those two intersection points,
the point is in the circle (the length from this point to the centre is smaller than the radius),
the point is on the correct side of the line (it must be beyond the line, on the far side from the centre).
Remark
In geometry, a circular segment (symbol: ⌓) is a region of a circle
which is "cut off" from the rest of the circle by a secant or a chord.
If x & y are not already relative to the center of the circle, subtract the coordinates of the center of the circle:
x -= circle.x
y -= circle.y
Use atan2 to get the angle of the point about the origin of the circle:
angle = atan2(y, x)
This angle is negative for points below the x-axis, so adjust to always be positive:
if (angle < 0) angle += 2 * pi
Assuming your segments are equally spaced, use this formula to get the index of the segment:
segment = floor((angle * numSegments) / (2 * pi))
If you find the result refers to a segment on the opposite side of the circle to the one you want, you might have to do y = -y at the beginning, or segment = (numSegments - 1) - segment at the end, to flip it round the right way, but it should basically work.
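The steps above, collected into one function (Python; the circle centre and segment count are parameters, and segments are assumed equal, numbered counter-clockwise from the positive x-axis):

```python
import math

def segment_index(x, y, cx, cy, num_segments):
    """Index of the equal circle segment containing point (x, y)."""
    # Make the point relative to the circle centre
    x -= cx
    y -= cy
    # Angle about the centre, adjusted into [0, 2*pi)
    angle = math.atan2(y, x)
    if angle < 0:
        angle += 2 * math.pi
    return int((angle * num_segments) // (2 * math.pi))

# With 4 segments: up-right is segment 0, up-left is segment 1,
# down-left is segment 2
```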
I have a stack of images (about 180 of them) and there are 2 stars (just basic annotations) on every single image. Hence, the position (x,y) of the two stars are provided initially. The dimensions of all these images are fixed and constant.
The angular 'distance' between the images is about 1°, with the origin at the center (width/2, height/2) of every single 2D image. Note that, if this is plotted out and interpolated nicely, the stars would actually form a ring of an irregular shape.
The dotted red circle and dotted purple circle are there to give a stronger sense of a 3D space and the arrangement of the 2D images (like a fan). It also indicates that each slice is about 1° apart.
With the provided (x,y) that appear in the 2D image, how do you get the corresponding (x,y,z) in the 3D space, knowing that each image is about 1° apart?
I know that MATLAB had 3D plotting capabilities, how should I go about implementing the solution to the above scenario? (Unfortunately, I have very little experience plotting 3D with MATLAB)
SOLUTION
Based on the accepted answer, I looked up a bit further: spherical coordinate system. Based on the computation of phi, rho and theta, I could reconstruct the ring without problems. Hopefully this helps anyone with similar problems.
I have also documented the solution here. I hope it helps someone out there, too:
http://gray-suit.blogspot.com/2011/07/spherical-coordinate-system.html
I believe the y coordinate stays as is for 3D, so we can treat this as converting 2D x and image angle to an x and z when viewed top down.
The 2D x coordinate is the distance from the origin in 3D space (viewed top down). The image angle is the angle the point makes with respect to the x-axis in 3D space (viewed top down). So the 2D x coordinate (distance from the origin) and the image angle together make up the x and z coordinates in 3D space (or x and y if viewed top down).
That is a polar coordinate.
Read how to convert from polar to cartesian coordinates to get your 3D x and z coordinates.
I'm not great at maths either, here's my go:
3D coords = (2Dx * cos(imageangle), 2Dy, 2Dx * sin(imageangle))
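That formula can be sketched as a small Python function; the image index and the 1° spacing come from the question, and the function name is made up:

```python
import math

def to_3d(x2d, y2d, image_index, degrees_per_image=1.0):
    """Convert a 2D annotation on slice `image_index` to 3D coordinates."""
    angle = math.radians(image_index * degrees_per_image)
    return (x2d * math.cos(angle),  # 3D x
            y2d,                    # 3D y, unchanged
            x2d * math.sin(angle))  # 3D z

# Image 0 maps (x, y) straight to (x, y, 0); by image 90 the same
# point has swung around so that x has become z
```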
Given the 2D coordinates (x,y) just add the angle A as a third coordinate: (x,y,A). Then you have 3D.
If you want the annotations to move on a circle of radius r in 3D, you can use (r*cos(phi), r*sin(phi), 0), which draws a circle in the XY-plane, and rotate it with a 3x3 rotation matrix into the orientation you need.
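A minimal Python sketch of this, rotating the XY-plane circle about the x-axis by an arbitrary example tilt (the 3x3 rotation is written out inline):

```python
import math

def tilted_circle_point(r, phi, tilt):
    """Point at angle phi on a radius-r circle, tilted about the x-axis."""
    # Circle in the XY-plane
    x, y, z = r * math.cos(phi), r * math.sin(phi), 0.0
    # Rotate about the x-axis by `tilt` (rows of the 3x3 matrix inlined)
    c, s = math.cos(tilt), math.sin(tilt)
    return (x, c * y - s * z, s * y + c * z)

# With tilt = 0 the circle stays in the XY-plane; with tilt = pi/2
# it lies in the XZ-plane instead
```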
It is not clear from your question around which axis your rotation is taking place. However, my answer holds for a general rotation axis.
First, put your points in a 3D space, lying on the X-Y plane. This means the points have a 0 z-coordinate. Then, apply a 3D rotation of the desired angle around the desired axis - in your example, it is a one degree rotation. You could calculate the transformation matrix yourself (should not be too hard, google "3D rotation matrix" or similar keywords). However, MATLAB makes it easier, using the viewmtx function, which gives you a 4x4 rotational matrix. The extra (fourth) dimension is dependent on the projection you specify (it acts like a scaling coefficient), but in order to make things simple, I will let MATLAB use its default projection - you can read about it in MATLAB documentation.
So, to make the plot clearer, I assume four points which are the vertices of a square lying on the x-y plane (A(1,1), B(-1,1), C(-1,-1), D(1,-1)).
az = 0;  % Angle (degrees) of rotation around the z axis, measured from the -y axis.
el = 90; % Angle (degrees) of rotation around the y' axis (the ' indicates axes after the first rotation).
x = [1, -1, -1, 1, 1]; y = [1, 1, -1, -1, 1]; z = [0, 0, 0, 0, 0]; % A square lying on the X-Y plane.
[m, n] = size(x);
x4d = [x(:), y(:), z(:), ones(m*n, 1)]'; % The 4D version of the points.
figure
for el = 90 : -1 : 0 % Start from 90 for viewing directly above the X-Y plane.
    T = viewmtx(az, el);
    x2d = T * x4d; % Rotated version of the points.
    plot3(x2d(1,:), x2d(2,:), x2d(3,:), '-*'); % Plot the rotated points in 3D space.
    grid
    xlim([-2, 2]);
    ylim([-2, 2]);
    zlim([-2, 2]);
    pause(0.1)
end
If you can describe your observation of a real physical system (like a binary star system) with a model, you can use particle filters.
Those filters were developed to locate a ship at sea when only one observation direction was available. One tracks the ship and estimates where it is and how fast it moves; the longer one follows, the better the estimates become.