3D Rotation Matrix deforms over time in Processing/Java

I'm working on a project where I want to generate a 3D mesh to represent a certain amount of data.
To create this mesh I want to use transformation matrices, so I created a class based on the mathematical algorithms found on a couple of websites.
Everything seems to work (scale/translation), but as soon as I rotate a mesh on its x-axis it starts to deform after 2 to 3 complete rotations. It feels like my scale values are increasing, which deforms my mesh. I've been struggling with this problem for a couple of days but I can't figure out what's going wrong.
To make things clearer, you can download my complete setup here.
I defined the coordinates of a box and put them through the transformation matrix before writing them to the screen.
This is the method that rotates my object:
void appendRotation(float inXAngle, float inYAngle, float inZAngle, PVector inPivot) {
  boolean setPivot = false;
  if (inPivot.x != 0 || inPivot.y != 0 || inPivot.z != 0) {
    setPivot = true;
  }
  // If setPivot is true, translate the position
  if (setPivot) {
    // Translations for the different axes need to be set separately
    if (inPivot.x != 0) { this.appendTranslation(inPivot.x, 0, 0); }
    if (inPivot.y != 0) { this.appendTranslation(0, inPivot.y, 0); }
    if (inPivot.z != 0) { this.appendTranslation(0, 0, inPivot.z); }
  }
  // Create a rotation matrix
  Matrix3D rotationMatrix = new Matrix3D();
  // x sin and x cos
  float xSinCal = sin(radians(inXAngle));
  float xCosCal = cos(radians(inXAngle));
  // y sin and y cos
  float ySinCal = sin(radians(inYAngle));
  float yCosCal = cos(radians(inYAngle));
  // z sin and z cos
  float zSinCal = sin(radians(inZAngle));
  float zCosCal = cos(radians(inZAngle));
  // Rotate around x
  rotationMatrix.setIdentity();
  rotationMatrix.matrix[1][1] = xCosCal;
  rotationMatrix.matrix[1][2] = xSinCal;
  rotationMatrix.matrix[2][1] = -xSinCal;
  rotationMatrix.matrix[2][2] = xCosCal;
  // Multiply the rotation into the basis matrix
  this.multiplyWith(rotationMatrix);
  // Rotate around y
  rotationMatrix.setIdentity();
  rotationMatrix.matrix[0][0] = yCosCal;
  rotationMatrix.matrix[0][2] = -ySinCal;
  rotationMatrix.matrix[2][0] = ySinCal;
  rotationMatrix.matrix[2][2] = yCosCal;
  // Multiply the rotation into the basis matrix
  this.multiplyWith(rotationMatrix);
  // Rotate around z
  rotationMatrix.setIdentity();
  rotationMatrix.matrix[0][0] = zCosCal;
  rotationMatrix.matrix[0][1] = zSinCal;
  rotationMatrix.matrix[1][0] = -zSinCal;
  rotationMatrix.matrix[1][1] = zCosCal;
  // Multiply the rotation into the basis matrix
  this.multiplyWith(rotationMatrix);
  // Untranslate the position
  if (setPivot) {
    if (inPivot.x != 0) { this.appendTranslation(-inPivot.x, 0, 0); }
    if (inPivot.y != 0) { this.appendTranslation(0, -inPivot.y, 0); }
    if (inPivot.z != 0) { this.appendTranslation(0, 0, -inPivot.z); }
  }
}
Does anyone have a clue?

You never want to cumulatively transform matrices. This introduces floating-point error into your matrices and causes problems such as unintended scaling or skewing, because the basis vectors drift away from being orthonormal.
The correct method is to keep track of the cumulative pitch, yaw and roll angles, and then reconstruct the transformation matrix from those angles every update.
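A minimal sketch of that idea, reusing the Matrix3D class from the question (the transform and pivot fields are assumptions for illustration, not the poster's actual code):
// Accumulate angles, not matrices: rebuild the transform from identity each frame.
float pitch = 0, yaw = 0, roll = 0; // cumulative angles in degrees
void update(float dPitch, float dYaw, float dRoll) {
  pitch += dPitch;
  yaw   += dYaw;
  roll  += dRoll;
  transform.setIdentity();                           // discard last frame's product
  transform.appendRotation(pitch, yaw, roll, pivot); // one fresh rotation, no error build-up
}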

If there is any chance, avoid multiplying rotation matrices. Keep track of the cumulative rotation and compute a new rotation matrix at each step.
If it is impossible to avoid multiplying the rotation matrices, then renormalize them (starts on page 16). It has worked just fine for me for more than ten thousand multiplications.
However, I suspect that it will not help you; numerical errors usually require more than 2 steps to manifest themselves. It seems to me the reason for your problem is somewhere else.
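For reference, a minimal Gram-Schmidt re-orthonormalization of a 3x3 rotation matrix could look like this (a sketch assuming row-major float[3][3] storage; not necessarily the exact scheme from the linked slides):
// Re-orthonormalize a 3x3 rotation matrix (Gram-Schmidt on the rows).
void renormalize(float[][] m) {
  // Normalize row 0
  float len0 = (float) Math.sqrt(m[0][0]*m[0][0] + m[0][1]*m[0][1] + m[0][2]*m[0][2]);
  for (int c = 0; c < 3; c++) m[0][c] /= len0;
  // Make row 1 orthogonal to row 0, then normalize it
  float d = m[1][0]*m[0][0] + m[1][1]*m[0][1] + m[1][2]*m[0][2];
  for (int c = 0; c < 3; c++) m[1][c] -= d * m[0][c];
  float len1 = (float) Math.sqrt(m[1][0]*m[1][0] + m[1][1]*m[1][1] + m[1][2]*m[1][2]);
  for (int c = 0; c < 3; c++) m[1][c] /= len1;
  // Row 2 = row 0 x row 1, orthonormal by construction
  m[2][0] = m[0][1]*m[1][2] - m[0][2]*m[1][1];
  m[2][1] = m[0][2]*m[1][0] - m[0][0]*m[1][2];
  m[2][2] = m[0][0]*m[1][1] - m[0][1]*m[1][0];
}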
Yaw, pitch and roll are not good for arbitrary rotations. Euler angles suffer from singularities and instability. Look at 38:25 in this presentation by David Sachs:
http://www.youtube.com/watch?v=C7JQ7Rpwn2k
Good luck!

As @don mentions, try to avoid cumulative transformations, as you can run into all sorts of problems. Rotating one axis at a time might lead you into gimbal lock issues. Try to do all rotations in one go.
Also, bear in mind that Processing comes with its own Matrix3D class called PMatrix3D, which has a rotate() method that takes an angle (in radians) and x, y, z values for the rotation axis.
Here is an example function that would rotate a bunch of PVectors:
PVector[] rotateVerts(PVector[] verts, float angle, PVector axis) {
  int vl = verts.length;
  PVector[] clone = new PVector[vl];
  for (int i = 0; i < vl; i++) clone[i] = verts[i].get();
  // rotate using a matrix
  PMatrix3D rMat = new PMatrix3D();
  rMat.rotate(angle, axis.x, axis.y, axis.z);
  PVector[] dst = new PVector[vl];
  for (int i = 0; i < vl; i++) {
    dst[i] = new PVector();
    rMat.mult(clone[i], dst[i]);
  }
  return dst;
}
and here is an example using it.
HTH

A shot in the dark: I don't know the rules or the name of the programming language you are using, but this procedure looks suspicious:
void setIdentity() {
  this.matrix = identityMatrix;
}
Are you sure you are taking a copy of identityMatrix? If you are only copying a reference, then identityMatrix will be modified by later operations, and soon nothing will make sense.
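If that is the case, a deep copy fixes it. A minimal sketch, assuming matrix is a float[4][4]:
void setIdentity() {
  // Build a fresh identity in new storage instead of aliasing the shared
  // identityMatrix array.
  this.matrix = new float[4][4];
  for (int i = 0; i < 4; i++) {
    this.matrix[i][i] = 1.0f;
  }
}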

Though the matrix renormalization suggested above probably works fine in practice, it is a bit ad hoc from a mathematical point of view. A better way is to represent the cumulative rotations using quaternions, which are converted to a rotation matrix only when applied. Quaternions also drift slowly from orthogonality (though more slowly), but the important thing is that they have a well-defined renormalization.
Good starting information for implementing this can be:
http://www.cprogramming.com/tutorial/3d/quaternions.html
http://www.scheib.net/school/library/quaternions.pdf
A useful academic reference:
K. Shoemake, "Animating rotation with quaternion curves," ACM SIGGRAPH Comput. Graph., vol. 19, no. 3, pp. 245–254, 1985. DOI: 10.1145/325165.325242
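To make the idea concrete, here is a hand-rolled sketch in Java (the (w, x, y, z) layout and the helper are illustrative, not from any particular library):
// Accumulate rotation as a quaternion, renormalize, convert to a matrix on use.
float[] q = {1, 0, 0, 0}; // (w, x, y, z), identity rotation
void rotateBy(float angleRad, float ax, float ay, float az) {
  float h = angleRad * 0.5f;
  float s = (float) Math.sin(h);
  float[] r = {(float) Math.cos(h), ax * s, ay * s, az * s}; // axis must be unit length
  q = multiply(r, q); // compose the new rotation onto the cumulative one
  float n = (float) Math.sqrt(q[0]*q[0] + q[1]*q[1] + q[2]*q[2] + q[3]*q[3]);
  for (int i = 0; i < 4; i++) q[i] /= n; // the well-defined renormalization
}
float[] multiply(float[] a, float[] b) { // Hamilton product
  return new float[] {
    a[0]*b[0] - a[1]*b[1] - a[2]*b[2] - a[3]*b[3],
    a[0]*b[1] + a[1]*b[0] + a[2]*b[3] - a[3]*b[2],
    a[0]*b[2] - a[1]*b[3] + a[2]*b[0] + a[3]*b[1],
    a[0]*b[3] + a[1]*b[2] - a[2]*b[1] + a[3]*b[0]
  };
}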

How to "move" or "traverse" the hyperbolic tessellation in MagicTile?

Alright, I think I've mostly figured out how MagicTile works (the source code at least, not so much the math yet). It all begins with the build and render calls in MainForm.cs. It generates a tessellation like this:
First, it "generates" the tessellation. Since MagicTile is a Rubik's-cube-like game, I guess it just statically computes all of the tiles up front. It does this by starting with a central tile and reflecting its polygon (and the polygon's segments and points) using some sort of math which I've read about several times but couldn't explain. Then it appears they allow rotations of the tessellation, where they call code like this in the "renderer":
Polygon p = sticker.Poly.Clone();
p.Transform( m_mouseMotion.Isometry );
Color color = GetStickerColor( sticker );
GLUtils.DrawConcavePolygon( p, color, GrabModelTransform() );
They track the mouse position, like if you are dragging, and somehow that is used to create an "isometry" to augment/transform the overall tessellation. Then they transform the polygon using that isometry. It appears they only do the central tile and 1 or 2 levels after that, but I can't quite tell; I haven't gotten the app to run and debug yet (it's also in C#, which is a new language for me, coming from TypeScript). The Transform function digs down like this (here it is in TypeScript, as I've been converting it):
TransformIsometry(isometry: Isometry) {
  for (let s of this.Segments) {
    s.TransformIsometry(isometry)
  }
  this.Center = isometry.Apply(this.Center)
}
That goes into the transform for the segments here:
/// <summary>
/// Apply a transform to us.
/// </summary>
TransformInternal<T extends ITransform>(transform: T) {
  // NOTES:
  // Arcs can go to lines, and lines to arcs.
  // Rotations may reverse arc directions as well.
  // Arc centers can't be transformed directly.
  // NOTE: We must calc this before altering the endpoints.
  let mid: Vector3D = this.Midpoint
  if (UtilsInfinity.IsInfiniteVector3D(mid)) {
    // Midpoint at infinity: fall back to a scaled finite endpoint.
    mid = UtilsInfinity.IsInfiniteVector3D(this.P1)
      ? this.P2.MultiplyWithNumber(UtilsInfinity.FiniteScale)
      : this.P1.MultiplyWithNumber(UtilsInfinity.FiniteScale)
  }
  this.P1 = transform.ApplyVector3D(this.P1)
  this.P2 = transform.ApplyVector3D(this.P2)
  mid = transform.ApplyVector3D(mid)
  // Can we make a circle out of the transformed points?
  let temp: Circle = new Circle()
  if (
    !UtilsInfinity.IsInfiniteVector3D(this.P1) &&
    !UtilsInfinity.IsInfiniteVector3D(this.P2) &&
    !UtilsInfinity.IsInfiniteVector3D(mid) &&
    temp.From3Points(this.P1, mid, this.P2)
  ) {
    this.Type = SegmentType.Arc
    this.Center = temp.Center
    // Work out the orientation of the arc.
    let t1: Vector3D = this.P1.Subtract(this.Center)
    let t2: Vector3D = mid.Subtract(this.Center)
    let t3: Vector3D = this.P2.Subtract(this.Center)
    let a1: number = Euclidean2D.AngleToCounterClock(t2, t1)
    let a2: number = Euclidean2D.AngleToCounterClock(t3, t1)
    this.Clockwise = a2 > a1
  } else {
    // The circle construction fails if the points
    // are collinear (if the arc has been transformed into a line).
    this.Type = SegmentType.Line
    // XXX - need to do something about this.
    // Turn into 2 segments?
    // if( UtilsInfinity.IsInfiniteVector3D( mid ) )
    // Actually the check should just be whether mid is between p1 and p2.
  }
}
So as far as I can tell, this adjusts the segments based on the mouse position, somehow. The mouse-position isometry updating code is here.
So it appears they don't have the functionality to "move" the tiling, as if you were walking on it, like in HyperRogue.
After having studied this code for a few days, I am still not sure how to move or walk along the tiles, moving the outer tiles toward the center, like you're a giant walking on Earth.
First small question: can you do this with MagicTile? Can you somehow update the tessellation to move a different tile to the center? (And is there a function into which I could plug a tween/animation so it animates there?) Or do I need to write some custom new code? If so, what do I need to do, roughly speaking, maybe in pseudocode?
What I imagine is: the user clicks on the outer part of the tessellation. We convert that click data to the tile index in the tessellation, then basically want to do tiling.moveToCenter(tile), but with frame-by-frame animation, so I'm not quite sure how that would work. But that moveToCenter, what would it do in terms of the MagicTile rendering/tile-generating code?
As I described in the beginning, it first generates the full tessellation, then only updates 1-3 layers of the tiles for its puzzles. So it's like I need to first shift the frame of reference and recompute all the potentially visible tiles, somehow without recreating the ones that already exist. I don't quite see how that would work, do you? Once the tiles are recomputed, I just re-render and it should show the updated center.
Is it a simple matter of calling some code like this again for each tile, where the isometry is somehow updated with a border-ish position on the tessellation?
Polygon p = sticker.Poly.Clone();
p.Transform( m_mouseMotion.Isometry );
Or must I do something else? I can't quite see the full picture yet.
Or is that what these 3 functions are doing? They are from my TypeScript port of the C# MagicTile:
// Move from a point p1 -> p2 along a geodesic.
// Also somewhat from Don.
Geodesic(g: Geometry, p1: Complex, p2: Complex) {
  let t: Mobius = Mobius.construct()
  t.Isometry(g, 0, p1.Negate())
  let p2t: Complex = t.ApplyComplex(p2)
  let m2: Mobius = Mobius.construct()
  let m1: Mobius = Mobius.construct()
  m1.Isometry(g, 0, p1.Negate())
  m2.Isometry(g, 0, p2t)
  let m3: Mobius = m1.Inverse()
  this.Merge(m3.Multiply(m2.Multiply(m1)))
}

Hyperbolic(g: Geometry, fixedPlus: Complex, scale: number) {
  // To the origin.
  let m1: Mobius = Mobius.construct()
  m1.Isometry(g, 0, fixedPlus.Negate())
  // Scale.
  let m2: Mobius = Mobius.construct()
  m2.A = new Complex(scale, 0)
  m2.C = new Complex(0, 0)
  m2.B = new Complex(0, 0)
  m2.D = new Complex(1, 0)
  // Back.
  // Mobius m3 = m1.Inverse(); // Doesn't work well if fixedPlus is on disk boundary.
  let m3: Mobius = Mobius.construct()
  m3.Isometry(g, 0, fixedPlus)
  // Compose them (multiply in reverse order).
  this.Merge(m3.Multiply(m2.Multiply(m1)))
}

// Allow a hyperbolic transformation using an absolute offset.
// offset is specified in the respective geometry.
Hyperbolic2(g: Geometry, fixedPlus: Complex, point: Complex, offset: number) {
  // To the origin.
  let m: Mobius = Mobius.construct()
  m.Isometry(g, 0, fixedPlus.Negate())
  let eRadius: number = m.ApplyComplex(point).Magnitude
  let scale: number = 1
  switch (g) {
    case Geometry.Spherical:
      let sRadius: number = Spherical2D.e2sNorm(eRadius)
      sRadius = sRadius + offset
      scale = Spherical2D.s2eNorm(sRadius) / eRadius
      break
    case Geometry.Euclidean:
      scale = (eRadius + offset) / eRadius
      break
    case Geometry.Hyperbolic:
      let hRadius: number = DonHatch.e2hNorm(eRadius)
      hRadius = hRadius + offset
      scale = DonHatch.h2eNorm(hRadius) / eRadius
      break
    default:
      break
  }
  this.Hyperbolic(g, fixedPlus, scale)
}

Oriented projectiles keep facing camera

I'm trying to render a 2D image that represents a projectile in a 3D world, and I'm having difficulty making the projectile face the camera without changing its direction. I'm using the JOML math library.
My working code to orient the projectile along its direction:
public Quaternionf findRotation(Vector3f objectRay, Vector3f targetRay) {
  Vector3f oppositeVector = new Vector3f(-objectRay.x, -objectRay.y, -objectRay.z);
  // Case: exactly opposite vectors
  if (oppositeVector.x == targetRay.x && oppositeVector.y == targetRay.y && oppositeVector.z == targetRay.z) {
    AxisAngle4f axis = new AxisAngle4f((float) Math.toRadians(180), 0, 0, 1);
    Quaternionf result = new Quaternionf(axis);
    return result;
  }
  objectRay = objectRay.normalize();
  targetRay = targetRay.normalize();
  double angleDif = Math.acos(new Vector3f(targetRay).dot(objectRay));
  if (angleDif != 0) {
    Vector3f orthoRay = new Vector3f(objectRay).cross(targetRay);
    orthoRay = orthoRay.normalize();
    AxisAngle4f deltaQ = new AxisAngle4f((float) angleDif, orthoRay.x, orthoRay.y, orthoRay.z);
    Quaternionf result = new Quaternionf(deltaQ);
    return result.normalize();
  }
  return new Quaternionf();
}
Now I want to add a Vector3f cameraPosition parameter to rotate the projectile only on its x axis to face the camera, but I don't know how to do it.
For example, with this code the projectile correctly rotates around its x axis but does not face the camera, so I need to find the correct angle.
this.lasers[i].getModel().rotate((float) Math.toRadians(5), 1, 0, 0);
I tried this to rotate around the x axis, transforming the vector before computing the angle:
this.lasers[i] = new VisualEffect(this.position, new Vector3f(1,1,1), color, new Vector2f(0,0.33f));
this.lasers[i].setModel(new Matrix4f().scale(this.lasers[i].getScale()));
this.lasers[i].getModel().rotate(rotation);
this.lasers[i].getModel().translateLocal(this.lasers[i].getPosition());
Vector3f vec = new Vector3f(cameraPosition).sub(this.position);
Vector4f vecSpaceModel = this.lasers[i].getModel().transform(new Vector4f(vec, 1.0f));
Vector4f normalSpaceModel = this.lasers[i].getModel().transform(new Vector4f(normal, 1.0f));
float angleX = new Vector2f(vecSpaceModel.y, vecSpaceModel.z).angle(new Vector2f(normalSpaceModel.y, normalSpaceModel.z));
this.lasers[i].getModel().rotate(angleX, 1, 0, 0);
Since you are using JOML, you can massively simplify your whole setup.
Let's assume that:
projectilePosition is the position of the projectile,
targetPosition is the position the projectile is flying at/towards, and
cameraPosition is the position of the "camera" (which we ultimately want the projectile to face)
We will also assume that the local coordinate system of the projectile is such that its +X axis points along the projectile's path (like how you depicted it) and the +Z axis points away from the projectile towards the viewer when the viewer is "facing" the projectile. So, the projectile itself is defined as a quad on the XY plane within its own local coordinate system.
What we must do now is create a basis transformation that will effectively transform the projectile such that its X axis points towards the "target" and its Z axis points "as best as we can" towards the camera.
This is very reminiscent of what we know as the "lookAt" transformation in OpenGL. And in fact, we are just going to use that. However, since the common "lookAt" is the inverse of what we want to do, we will also just invert it.
So, all in all, your complete model matrix/transformation for a single projectile will look like this (in JOML):
Vector3f projectilePosition = ...;
Vector3f cameraPosition = ...;
Vector3f targetPosition = ...;
Vector3f projectileToCamera = new Vector3f(cameraPosition).sub(projectilePosition);
modelMatrix
.setLookAt(projectilePosition, targetPosition, projectileToCamera)
.invert()
.rotateXYZ((float) Math.toRadians(-90), 0, (float) Math.toRadians(90));
In case you do not want to use lookAt and invert, you can also do:
Vector3f projectileToTarget = new Vector3f(targetPosition).sub(projectilePosition);
modelMatrix
.translation(projectilePosition)
.rotateTowards(projectileToTarget, projectileToCamera)
.rotateXYZ((float) Math.toRadians(-90), 0, (float) Math.toRadians(-90));
yielding the same result as the above code.
Note that nowhere do we actually need angles or trigonometric functions. This is very common when you already have all positions/directions given as vectors: you can simply use linear algebra without converting from/to angles.
The final rotateXYZ(...) call expresses that we do not want the -Z axis of the projectile to point towards the target (which is what lookAt does by default); we want the X axis to point at the target.
Yet another way is to realize that what we do here is also known as a "cylindrical" or "axial" billboard, and can also be expressed like so:
Vector3f projectileToTarget = new Vector3f(targetPosition).sub(projectilePosition).normalize();
modelMatrix
.billboardCylindrical(projectilePosition, cameraPosition, projectileToTarget)
.rotateZ((float) Math.toRadians(90));
(Note that in this case projectileToTarget needs to be a unit vector!)
A test with a simple scene containing 24 projectiles, all targeting "the center", with the camera hovering over them looks like this:
The corresponding simple LWJGL 3 / JOML demo generating this image.

OpenGL ES: Handle large amount matrixdata improve performance

I am using instancing in my OpenGL app, and since only one draw call is made, I have to build a larger matrix array consisting of the smaller per-instance matrices. That larger array is sent to the shader, where gl_InstanceID can distinguish between successive matrices.
It is uploaded with the following call:
GLES30.glUniformMatrix4fv(mMVPMatrixHandleBall, nBalls, false, mMVPMatrixMajor, 0);
and in the shader the multiplication is done by:
gl_Position = u_MVPMatrix[gl_InstanceID] * a_Position;
Simple!
On the client side the larger matrix is created by the following code:
private void setLargeMVPmatrix() {
  int cnt = 0;
  for (Iterator<Ball> shapeIterator = arrayListBalls.iterator(); shapeIterator.hasNext(); ) {
    Ball ball = shapeIterator.next();
    mModelMatrix = ball.getmModelMatrix();
    // Multiply: MVP = P * (V * M)
    Matrix.multiplyMM(mMVPMatrix, 0, mViewMatrix, 0, mModelMatrix, 0);
    Matrix.multiplyMM(mMVPMatrix, 0, mProjectionMatrix, 0, mMVPMatrix, 0);
    // Copy the matrix data into a larger array, i.e. one big array that
    // contains several smaller matrices
    for (int i = 0; i < CreateGLContext.MATRIX_SIZE; i++) {
      mMVPMatrixMajor[i + CreateGLContext.MATRIX_SIZE * cnt] = mMVPMatrix[i];
    }
    cnt++;
  }
}
If I have moving objects on the screen, like bouncing balls (for instance 100 balls bouncing around), I have to continuously update their positions, which means I have to call this method every frame. The consequence is that it becomes a real performance bottleneck. I know this because just commenting out the method gives a real performance boost, though of course the balls no longer move.
So my question: is there a solution to this problem? If I use instancing, I have to send a large matrix array as described above.
Edit:
I've even tried the following, which I thought could at least partially solve my problem, in the draw method:
int cnt = 0;
for (Iterator<Ball> it = arrayListBalls.iterator(); it.hasNext(); ) {
  Ball ball = it.next();
  mModelMatrix = ball.getmModelMatrix();
  // Multiply: MVP = P * (V * M)
  Matrix.multiplyMM(mMVPMatrix, 0, mViewMatrix, 0, mModelMatrix, 0);
  Matrix.multiplyMM(mMVPMatrix, 0, mProjectionMatrix, 0, mMVPMatrix, 0);
  GLES30.glUniformMatrix4fv((mMVPMatrixHandleBall + cnt), 1, false, mMVPMatrix, 0);
  cnt++;
}
Thanks in advance!!!
If the data that change are positions and rotations, then that's what you should upload to the shader.
Doing most of the matrix math on the CPU is slow, unless the needed operations are tiny, like computing the new view and projection matrices; those are the same for all objects and cheap to pass as uniforms.
For every frame I'd re-fill a buffer object, perhaps with the help of glMapBufferRange or glBufferSubData, with the new positions and rotations.
Then, in the shader, build the needed matrices and do the matrix multiplications there.
If initial positions and rotations are needed to build the new matrices, then you must also pass them in another buffer, although you only need to update it for the first frame.
With the proper attribute order you read these positions and rotations in the shader. gl_InstanceID is then not needed for the gl_Position calculation, though it may be needed for other per-object properties. A rough sketch of the upload follows.
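This sketch assumes an instanceVbo created at init time, a per-instance vec3 position attribute at location 3, and a Ball.getPosition() accessor returning {x, y, z}; none of these names come from the question:
// Direct buffer allocated once; 3 floats per ball, 4 bytes per float.
FloatBuffer instanceData = ByteBuffer
    .allocateDirect(nBalls * 3 * 4)
    .order(ByteOrder.nativeOrder())
    .asFloatBuffer();
private void uploadInstancePositions() {
  instanceData.clear();
  for (Ball ball : arrayListBalls) {
    instanceData.put(ball.getPosition()); // assumed accessor returning {x, y, z}
  }
  instanceData.position(0);
  GLES30.glBindBuffer(GLES30.GL_ARRAY_BUFFER, instanceVbo);
  GLES30.glBufferSubData(GLES30.GL_ARRAY_BUFFER, 0, nBalls * 3 * 4, instanceData);
  // The divisor makes the attribute advance once per instance, not per vertex.
  GLES30.glVertexAttribPointer(3, 3, GLES30.GL_FLOAT, false, 0, 0);
  GLES30.glVertexAttribDivisor(3, 1);
  GLES30.glEnableVertexAttribArray(3);
}
The vertex shader then only needs the shared view-projection matrix as a uniform and adds the per-instance position to each vertex, so the big per-instance MVP array disappears.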
If you need help building matrices inside the shaders, look up glRotate and glTranslate in the OpenGL 2.1 docs, where you can find the definitions.
Also note that passing one big matrix array for all objects as a uniform may exceed the size limit on the whole uniform data.

Camera speed dependent on 3D scene extents

I'm working on a 3D scene viewer with the HOOPS Engine.
I want to implement support for a 3D mouse. Everything is working fine, but there is one remaining problem I don't know how to solve:
I move my camera with the following formula:
HC_Dolly_Camera(-(factor.x * this->m_speed), factor.y * this->m_speed, factor.z * this->m_speed);
this->m_speed depends on the scene extents. But if the scene is really big (e.g. an airport), the camera speed at a deep zoom level is ridiculously fast.
My first attempt was to implement a kind of damping factor dependent on the distance from objects to my camera. It works ... somehow. Sometimes I noticed ugly "bouncing effects", which I can avoid with smooth acceleration and a modified cosine function.
But my question is: is there a best practice for reducing camera speed in close-up situations in a 3D scene? My approach works, but I don't think it is a good solution, because it uses many raycasts.
Best regards,
peekaboo777
P.S.:
My code
if (!this->smooth_damping)
{
    if (int res = HC_Compute_Selection_By_Area(this->view->GetDriverPath(), ".", "v", -0.5, 0.5, -0.5, 0.5) > 0)
    {
        float window_x, window_y, window_z, camera_x, camera_y, camera_z;
        double dist_length = 0;
        double shortest_dist = this->max_world_extent;
        while (HC_Find_Related_Selection())
        {
            HC_Show_Selection_Position(&window_x, &window_y, &window_z, &camera_x, &camera_y, &camera_z);
            this->view->GetCamera(&this->cam);
            // Compute distance vector
            this->dist.Set(cam.position.x - camera_x, cam.position.y - camera_y, cam.position.z - camera_z);
            dist_length = sqrt(pow((cam.position.x - camera_x), 2) + pow((cam.position.y - camera_y), 2) + pow((cam.position.z - camera_z), 2));
            if (dist_length < shortest_dist)
                shortest_dist = dist_length;
        }
        // Reduced computation
        // Compute damping factor
        damping_factor = ((1 - 8) / (this->max_world_extent - 1)) * (shortest_dist - 1) + 8;
        // Difference too big? (Gap)
        if (qFabs(damping_factor - damping_factor * 0.7) < qFabs(damping_factor - this->last_damping_factor))
        {
            this->smooth_damping = true;
            this->damping_factor_to_reach = damping_factor;           // the new damping factor we have to reach
            this->freezed_damping_factor = this->last_damping_factor; // damping factor before the gap
            if (this->last_damping_factor > damping_factor) // negative acceleration
            {
                this->acceleration = false;
            }
            else // acceleration
            {
                this->acceleration = true;
            }
        }
        else
        {
            this->last_damping_factor = damping_factor;
        }
    }
}
else
{
    if (this->acceleration)
    {
        if (this->freezed_damping_factor -= 0.2 >= 1);
        damping_factor = this->freezed_damping_factor +
            (((this->damping_factor_to_reach - this->freezed_damping_factor) / 2) -
             ((this->damping_factor_to_reach - this->freezed_damping_factor) / 2) *
             qCos(M_PI * this->damping_step)); // cosine function between freezed and to-reach
        this->last_damping_factor = damping_factor;
        if (damping_factor >= this->damping_factor_to_reach)
        {
            // Reset
            this->smooth_damping = false;
            this->damping_step = 0;
            this->freezed_damping_factor = 0;
        }
    }
    else
    {
        if (this->freezed_damping_factor += 0.2 >= 1);
        damping_factor = this->damping_factor_to_reach +
            ((this->freezed_damping_factor - this->damping_factor_to_reach) -
             (((this->freezed_damping_factor - this->damping_factor_to_reach) / 2) -
              ((this->freezed_damping_factor - this->damping_factor_to_reach) / 2) *
              qCos(M_PI * this->damping_step))); // cosine function between to-reach and freezed
        this->last_damping_factor = damping_factor;
        if (damping_factor <= this->damping_factor_to_reach)
        {
            // Reset
            this->smooth_damping = false;
            this->damping_step = 0;
            this->freezed_damping_factor = 0;
        }
    }
    this->damping_step += 0.01; // increase the "x"
}
I've never used the HOOPS engine, but do you have any way to get the distance to the object closest to the camera? You could scale your speed with this value, so your camera gets slower close to objects.
Even better would be to take the closest point on a bounding box instead of the center of the object. This would improve the behaviour close to big objects like long walls or floors.
Another solution I'd try would be to raycast through the view center, look for the first object, and use that distance the same way. With this approach you won't be slowed down by objects behind you. You may also add additional raycast points, for example at 1/4 of the screen, and blend the resulting values so you get a more constant speed scale. A small sketch of the basic scaling follows.
What I understand from your question is that you want a way to steer the camera through large scenes, like an airport, and still be able to move slowly close to objects. I don't think there's a 'universal' way of doing it. It all depends on your engine/API features and your specific needs. If none of those solutions work, maybe you should try paper and pen ;).
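The core idea is just a clamped linear scale. A minimal sketch (shown in Java for brevity since the idea is engine-agnostic; all names are illustrative):
// Scale the dolly speed by the distance to the nearest geometry,
// clamped so the camera never stops completely.
static float dollySpeed(float baseSpeed, float distToNearest, float referenceDist) {
  float minFactor = 0.01f;                      // never slower than 1% of base speed
  float factor = distToNearest / referenceDist; // ~0 near objects, ~1 far away
  return baseSpeed * Math.max(minFactor, Math.min(1.0f, factor));
}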
You said that m_speed depends on the scene extent. I guess this dependency is linear (by linear I mean that if you measure the scene extent as sExtent, then m_speed equals c*sExtent, where c is some constant coefficient).
So, to keep m_speed dependent on the scene extent while avoiding a huge m_speed for large scenes, I suggest making the dependency of m_speed on sExtent non-linear, for example logarithmic:
m_speed = c*log(sExtent+1)
This way your m_speed will be bigger if the scene is bigger, but not in the same ratio. You can also use a radical (e.g. a square root) to create a non-linear dependency. Below you can compare these functions:
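To see the effect of the logarithm, a quick illustrative comparison (c = 0.5 and the extents are arbitrary example values):
// Compare linear and logarithmic speed scaling for growing scene extents.
double c = 0.5;
for (double sExtent : new double[] {10, 100, 1000, 10000}) {
  double linear = c * sExtent;                    // grows as fast as the scene
  double logarithmic = c * Math.log(sExtent + 1); // grows much more slowly
  System.out.printf("extent=%7.0f  linear=%7.1f  log=%5.2f%n", sExtent, linear, logarithmic);
}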

What is the method to detect whether a given picture is human face or not?

Is there any simple algorithm to judge whether a given image is a face or something else (hopefully without training)?
My thought is to construct the eigenvectors of each image and then apply some clustering method (for example k-means with k = 2). But I'm not sure what the best criterion to distinguish face from non-face would be, even if a good clustering result is obtained.
Eigendecomposition reduces dimensionality in a continuous domain by finding directions in data space with high variance. K-means finds clusters in regions of space with a high density of points. You are kind of mixing them together, while completely ignoring how you would arrive at the face features in the first place (and how you would scale, rotate and crop whatever you want to inspect).
You don't need to train Haar detectors, since they are already trained for faces. They detect a face; they don't recognize its identity. All you need is to port the code together with a little file of parameters obtained after training (which has already been performed), as Shiva suggested above.
Thoughtless copy-pasting of the code doesn't make much sense, though. Read a bit about Haar features. Try to understand:
Why they work: faces have features most pronounced at an intermediate spatial scale, such as the eyes, nose and brows. Too-small features (the size of the pupil) or too-large ones (the size of the whole face) are less useful.
Why Haar features are preferred to wavelets or Gabor filters: Haar features are just rough (boxy) approximations of Gabor filters, but since they can be computed quickly with integral images, they are preferred to their more precise but slower counterparts.
What the restrictions are: Haar features have their own spatial scale and orientation, but can be quickly recalculated for another scale.
How to train a Haar classifier (the most exciting topic, which you are trying to avoid): AdaBoost is one way to train a more complex classifier consisting of several Haar features. Finally, to speed up processing you can ask a slightly different question instead of "find me a face": you can try to quickly eliminate the areas of the image that cannot be a face. This is called cascade classification. Study these aspects in a systematic way and you will learn more about face detection than you would from pasting code.
You can use the Haar classifier method for face detection in an image or video frame.
Sample code for finding faces in an image looks like this:
int main(int argc, _TCHAR* argv[])
{
    IplImage* img;
    img = cvLoadImage("dasl_hubo.jpg");
    CvMemStorage* storage = cvCreateMemStorage(0);
    // Note that you must copy haarcascade_frontalface_alt2.xml from
    // C:\Program Files\OpenCV\data\haarcascades\ (or wherever OpenCV is installed)
    // to your working directory.
    CvHaarClassifierCascade* cascade = (CvHaarClassifierCascade*)cvLoad("haarcascade_frontalface_alt2.xml");
    double scale = 1.3;
    // Rectangles of these colors will be drawn around the detected faces.
    static CvScalar colors[] = { {{0,0,255}}, {{0,128,255}}, {{0,255,255}},
                                 {{0,255,0}}, {{255,128,0}}, {{255,255,0}},
                                 {{255,0,0}}, {{255,0,255}} };
    // Detect objects
    cvClearMemStorage(storage);
    CvSeq* objects = cvHaarDetectObjects(img, cascade, storage, 1.1, 4, 0, cvSize(40, 50));
    CvRect* r;
    // Loop through the detections and draw boxes
    for (int i = 0; i < (objects ? objects->total : 0); i++) {
        r = (CvRect*)cvGetSeqElem(objects, i);
        cvRectangle(img, cvPoint(r->x, r->y),
                    cvPoint(r->x + r->width, r->y + r->height),
                    colors[i % 8]);
    }
    cvNamedWindow("Output");
    cvShowImage("Output", img);
    cvWaitKey();
    cvReleaseImage(&img);
    return 0;
}
Visit these links to find out more about face detection using Haar cascades:
drexel.edu
OpenCV documentation
presentation on Haar training and usage
Here is my OpenCV code in C++. It is simple to detect faces in an image with the help of OpenCV Haar-like features; you may refer to the documentation for the usage of the methods in it. I hope it helps.
CascadeClassifier face_cascade;   // loads the Haar-like face cascade shipped with OpenCV
std::vector<Rect> faces;          // for storing detected faces
vector<Point2d> FaceCenter;       // for storing centres of faces

Mat frame_gray = imread("/Users/xxx/Desktop/xxx.jpg", CV_LOAD_IMAGE_GRAYSCALE); // read the image in grayscale
equalizeHist(frame_gray, frame_gray); // equalize the histogram to improve the contrast

String face_cascade_name = "/Users/xxx/opencv-2.4.7/data/haarcascades/haarcascade_frontalface_alt.xml"; // path of the trained faces .xml file
if (!face_cascade.load(face_cascade_name)) // load the .xml
{
    cout << "face_cascade.xml load error" << endl;
}

face_cascade.detectMultiScale(frame_gray, faces, 1.1, 2, 0, Size(50, 50)); // detect faces in the image
for (size_t i = 0; i < faces.size(); i++)
{
    Point2d center(faces[i].x + faces[i].width * 0.5, faces[i].y + faces[i].height * 0.5); // store centres of faces
    FaceCenter.push_back(center);
    // Draw an ellipse around each face in the image (optional)
    ellipse(frame_gray, center, Size(faces[i].width * 0.5, faces[i].height * 0.25), 0, 0, 360, Scalar(255, 0, 0), 2, 8, 0);
}
imshow("Faces Detection", frame_gray); // show the result
