Physijs - ConvexMesh wall collision detection issue - three.js

I'm using Three.js and Physijs. I have a wall that should act as a boundary, but objects (especially boxes) often pass through it if the force is sufficient. The collision is detected, since they do not pass through cleanly: they start spinning or bounce off in some direction. Is there a way to increase the maximum force with which the wall can act on the colliding object?
All four of the wall's points are on the same plane, forming a rectangle. The mesh consists of two large triangular faces. I'm using a ConvexMesh.
Breaking the two triangles into many smaller ones does not alleviate the problem.
I can confirm the normals are fine, as the wall is shaded correctly.
How can I solve this without converting the wall into a BoxMesh?
I'll also appreciate an explanation of why this happens. I'm guessing that the engine limits the maximum force that collisions can apply.

I think it's Motion Clamping
https://github.com/chandlerprall/Physijs/wiki/Collisions
When an object has a high velocity, collisions can be missed if it moves through and past other objects between simulation steps. To fix this, enable CCD motion clamping. For a cube of size 1 try:
// Enable CCD if the object moves more than 1 meter in one simulation frame
mesh.setCcdMotionThreshold(1);
// Set the radius of the embedded sphere such that it is smaller than the object
mesh.setCcdSweptSphereRadius(0.2);
Hope this works, I'm going to try it now.
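
For context, here is a minimal sketch of how CCD might be wired into a Physijs scene; the geometry size, friction, and restitution values are assumptions for illustration, not taken from the question:

// Assumes three.js and Physijs are loaded and scene is a Physijs.Scene.
var geometry = new THREE.BoxGeometry(1, 1, 1); // CubeGeometry in older revisions
var material = Physijs.createMaterial(
    new THREE.MeshLambertMaterial({ color: 0x888888 }),
    0.8, // friction (assumed value)
    0.3  // restitution (assumed value)
);
var box = new Physijs.BoxMesh(geometry, material, 1); // mass = 1

// Enable CCD: check for tunneling whenever the box moves more than
// 1 unit in a simulation step, sweeping a sphere of radius 0.2.
box.setCcdMotionThreshold(1);
box.setCcdSweptSphereRadius(0.2);

scene.add(box);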

Related

Can points or meshes be drawn at infinite distance?

I'm interested in drawing a stardome in THREE.js using either mesh points or a particle system.
I don't want the camera to be able to move any closer to any part of the stardome, since the stars are effectively at infinite distance.
I can think of a couple of ways to do this:
A very large mesh (or very large point/particle distances)
Camera and stardome have their movement exactly linked.
Is there any way to specify that a mesh, point, or particle system is automatically rendered at infinite distance, so it is always drawn behind any foreground objects?
I haven't used three.js, but my guess is no. OpenGL cameras need a "near clipping plane" and a "far clipping plane", which effectively denote the minimum and maximum distances at which things are rendered. If you've played video games where you move too close to a wall and start to see through it, or see things in the distance suddenly vanish as you move away, those were probably the clipping planes at work.
The workaround is usually one of 2 ways:
1) Set the far clipping plane distance as high as it'll let you go. I don't know what data type three.js would use for this, but my guess is a 32-bit float.
2) Render it in "layers". Render all the stars first before anything else in the scene.
Option 2 is the one I usually use.
Even if you used option 1, you would still want to synchronize the positions of the camera and the skybox.
If you do not depth cull, draw the skybox first and match its position, but not rotation, to the camera.
Also disable lighting on the skybox. Instead, bake an ambience directly into its texture.
You don't want things infinitely far away; you just want them not to move with respect to the viewer and not to appear in front of things. The best way to do that is to prevent the viewer from getting closer to them, which produces the illusion of the object being far away. The second part is to modify your depth culling so that the skybox is always considered further away than whatever you are currently drawing.
If you create a very large mesh object, you'll have to set your camera's far plane large enough to include the mesh which means you'll end up drawing things that you really do want to cull.
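
As a rough illustration of the layered approach, here is a minimal sketch; the sphere radius and the variable names (starTexture, scene, camera, renderer) are assumptions:

// Unlit material: ambience is baked directly into the star texture.
var stardome = new THREE.Mesh(
    new THREE.SphereGeometry(500, 32, 16),
    new THREE.MeshBasicMaterial({
        map: starTexture,
        side: THREE.BackSide, // render the inside of the sphere
        depthWrite: false     // never occlude foreground objects
    })
);
scene.add(stardome);

function animate() {
    requestAnimationFrame(animate);
    // Match the camera's position, but not its rotation, so the
    // stars never get closer and appear fixed at infinity.
    stardome.position.copy(camera.position);
    renderer.render(scene, camera);
}
animate();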

three.js - Overlapping layers flickering

When several objects overlap on the same plane, they start to flicker. How do I tell the renderer to put one of the objects in front?
I tried to use .renderDepth, but it only partly works; see the example here: http://liveweave.com/ahTdFQ
When both boxes have the same size, it works as intended: I can change which of the boxes is visible by setting .renderDepth. But if one of the boxes is a bit smaller (say 40,50,50), the contacting layers flicker and the render depth no longer works.
How to fix that issue?
When .renderDepth doesn't work, you have to set the depths yourself.
Moving whole meshes around is indeed not really efficient.
What you are looking for are offsets bound to materials:
material.polygonOffset = true;
material.polygonOffsetFactor = -0.1;
should solve your issue. See update here: http://liveweave.com/syC0L4
Use negative factors to display and positive factors to hide.
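
To make that concrete, here is a minimal sketch of the technique; the box sizes mirror the question, while the colors and the polygonOffsetUnits value are assumptions:

// The smaller box's material is pulled toward the camera in
// depth-buffer space, so its coplanar faces win the depth test.
var bigBox = new THREE.Mesh(
    new THREE.BoxGeometry(50, 50, 50),
    new THREE.MeshLambertMaterial({ color: 0x2194ce })
);
var decalMaterial = new THREE.MeshLambertMaterial({ color: 0xcc3333 });
decalMaterial.polygonOffset = true;
decalMaterial.polygonOffsetFactor = -0.1; // negative: display in front
decalMaterial.polygonOffsetUnits = -4;    // assumed value
var smallBox = new THREE.Mesh(new THREE.BoxGeometry(40, 50, 50), decalMaterial);
scene.add(bigBox);
scene.add(smallBox);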
For starters, try reducing the far range on your camera; try 1000. Generally speaking, you shouldn't have overlapping faces in your 3D scene unless they are treated in a very specific way (look up the term 'decal textures'/'decals'). So basically, you have to create depth offsets, and perhaps even pre-sort the objects, all of which requires pretty low-level tinkering.
If the far range reduction helps, then you're running into a lack of depth precision (which depends on the device). Also look up 'z-fighting'.
UPDATE
Don't overlap planes.
How do I tell the renderer to put one of the objects in front?
You put one object in front of the other :)
For example, if you have a camera at (0,0,0) looking at an object at (0,0,10), and you want another object to be behind the first one, put it at (0,0,11); it should work.
UPDATE2
What is z-buffering:
http://en.wikipedia.org/wiki/Z-buffering
http://msdn.microsoft.com/en-us/library/bb976071.aspx
Take note of "floating point in range of 0.0 - 1.0".
What is z-fighting:
http://en.wikipedia.org/wiki/Z-fighting
...have similar values in the z-buffer. It is particularly prevalent with coplanar polygons, where two faces occupy essentially the same space, with neither in front. Affected pixels are rendered with fragments from one polygon or the other arbitrarily, in a manner determined by the precision of the z-buffer.
"The renderer cannot reposition anything."
I think that this is completely untrue. The renderer can reposition everything, and probably does if it's not shadertoy, or some video filter or something. Every time you move your camera the renderer repositions everything (the camera is actually the only thing that DOES NOT MOVE).
It seems that you are missing some crucial concepts here; I'd start with this:
http://www.opengl-tutorial.org/beginners-tutorials/tutorial-3-matrices/
About the depth offset mentioned:
Here is how this would work: say you want to draw a decal on a surface. You can 'draw' another mesh on this surface, for example by projecting a quad onto it. You want to draw a bullet hole over a concrete wall, and you end up with two coplanar surfaces: the wall and the bullet hole. You can figure out the depth buffer precision, find the smallest representable step, and then move the bullet hole mesh by that value towards the camera. The object does not get scaled (you're doing this in NDC, which you can visualize as a cube in which you move planes back and forth by the smallest possible increment), but it does translate in the depth direction, ending up in front of the other surface.
I don't see any flicker; the cube movement in 3D seems super-smooth. Can you try on a different computer (maybe a faster one)? I used Chrome on a MacBook Pro.

Is it possible to use GIS terrain vector data in three.js?

I'm new to three.js and WebGL in general.
The sample at http://css.dzone.com/articles/threejs-render-real-world shows how to use raster GIS terrain data in three.js
Is it possible to use vector GIS data in a scene? For example, I have a series of points representing locations (including height) stored in real-world coordinates (meters). How would I go about displaying those in three.js?
The basic sample at http://threejs.org/docs/59/#Manual/Introduction/Creating_a_scene shows how to create a geometry using coordinates - could I use a similar approach with real-world coordinates such as
"x" : 339494.5,
"y" : 1294953.7,
"z": 0.75
or do I need to convert these into page units? Could I use my points to create a surface on which to drape an aerial image?
I tried modifying the simple sample but I'm not seeing anything (or any error messages): http://jsfiddle.net/slead/KpCfW/
Thanks for any suggestions on what I'm doing wrong, or whether this is indeed possible.
I did a number of things to get the JSFiddle to show something; see here: http://jsfiddle.net/HxnnA/
You did not specify any faces in your geometry. In this case I just hard-coded a face with all three of your data points acting as corners. Alternatively, you can look into using particles to display your data as points instead of faces.
Set the material's side to THREE.DoubleSide. This is not usually needed or recommended, but it helps debugging in the early phases, when you can see both sides of a face.
Your camera was probably looking in the wrong direction. I added a lookAt() to point it at the center and made the field of view wider (this just makes it easier to find things while coding).
Your camera's near and far planes were likely off-range for the camera position and terrain dimensions, so I increased the far plane distance.
Your coordinate values were quite huge, so I modified them by hand a bit to make sense in relation to the camera and to make sure they form a big enough triangle to be seen. You could consider dividing your coordinates by something like 100 to make the units smaller, but adjusting the camera to account for the huge scale should be enough too.
Nothing wrong with your approach; just make sure you feed in the data so that it makes sense considering the camera location, direction, and near and far planes. Pay attention to how you make the faces: the parameters to Face3 are the indices of the points in your vertices array (see the sketch below). Later on you might need to take winding order, normals, and UVs into account. You can study the geometry classes included in Three.js for reference.
Three.js does not assign any meaning to units. It's just floating point numbers, and you can decide yourself what a unit (1.0) represents. Whether it's 1 mm, 1 inch, or 1 km depends on what makes the most sense for the application and its scale. Floating point numbers can bring precision problems when the actual values are extremely small or extremely big. My own applications typically deal with things in the range of a couple of centimeters to a couple hundred meters, and using units such that 1.0 = 1 meter has been working fine.
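
As a rough sketch of the approach described above, using the old Geometry/Face3 API from the r59 docs linked in the question; the coordinate offsets and the second and third vertices are invented for illustration (only the first vertex comes from the question):

// Shift the huge real-world coordinates toward the origin (assumed offsets).
var offsetX = 339000, offsetY = 1294000;
var geometry = new THREE.Geometry();
// First vertex is the question's sample point; the other two are made up.
geometry.vertices.push(new THREE.Vector3(339494.5 - offsetX, 1294953.7 - offsetY, 0.75));
geometry.vertices.push(new THREE.Vector3(339595.0 - offsetX, 1294850.0 - offsetY, 1.2));
geometry.vertices.push(new THREE.Vector3(339400.0 - offsetX, 1294800.0 - offsetY, 0.9));
// Face3 takes the indices of three entries in geometry.vertices.
geometry.faces.push(new THREE.Face3(0, 1, 2));
geometry.computeFaceNormals();
scene.add(new THREE.Mesh(
    geometry,
    new THREE.MeshBasicMaterial({ color: 0x88aa55, side: THREE.DoubleSide })
));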

How to rotate circle in response to collision?

I am in the process of developing a very simple physics engine. The only non-static objects in it will be circles and the only collision detection I will be performing is between circles and line pieces.
For this purpose I am using the principles described in Advanced Character Physics. That is, I integrate with a simple Verlet integrator, and I handle collision detection and response simply by calculating the distance between the circles and the line pieces: whenever that distance is less than a circle's radius, I project the circle out of the line piece.
This works very well, and the result is a practically perfect moving circle. The current state of the engine can be seen here: http://jsfiddle.net/8K4Wj/. This, however, also shows the one major problem I am facing: the circle does not rotate at all.
As far as I can figure out, there are three different collision cases that have to be dealt with separately:
When the circle is colliding with a line vertex and is not rolling along the line.
When the circle has just hit or rolled off a line. Then the exact point of impact has to be calculated (how?) and the circle rotated according to the distance between the impact position and the projected position.
When the circle is rolling along a line. Then it is simply rotated according to the distance traveled since the last frame.
Here is the closest I have gotten to solving the problem: http://jsfiddle.net/vYjzt/. But as the demo shows, it doesn't handle the edge cases properly.
I have searched for a solution online, but I cannot find any material that deals with this specific problem (as I said, the physics engine is relatively simple and I do not want to bother with complex physics simulation concepts).
What looks wrong in your demo is that you're not considering angular momentum and energy when determining the motion.
For the easy case, when the wheel is dropping to the floor in your demo, it stops spinning while in free fall. Angular momentum should keep it going.
A more complicated situation is when the wheel finally lands on the floor, it moves with the same horizontal velocity it had before hitting floor. This would be correct if it wasn't rolling but since it is rolling, some of the kinetic energy will have to go into the spinning motion, and this should slow it down. As a more clear example of this, consider the opposite case where the wheel is spinning quickly but has no linear momentum. When this wheel is set on the floor, it should take off and the spinning should slow. Also, for example, as the wheel rolls down a hill, it accelerates more slowly because the energy needs to go into both linear and circular motion.
It's not too hard to do, but to show a rolling object in a way that looks intuitively correct, I think you'll need to consider the kinetic energy and angular momentum associated with rolling. By "not too hard", I mean that all of your equations will essentially be twice as long, with one term for linear motion and another for angular. I won't recite all of the equations; it's basically just the chapter on rotational motion from any physics text.
(Nice demo, btw!)
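
For the simplest of the three cases listed in the question, rolling along a line without slipping, the arc length rolled equals the linear distance traveled, so v = ωr and the rotation increment is distance / radius (the kinetic energy then splits as KE = ½mv² + ½Iω²). A minimal sketch of that case follows; the Verlet-style state (pos, prevPos, angle, radius) and the contact normal are assumptions about the demo's structure, not its actual code:

function updateRolling(circle, contactNormal) {
    var dx = circle.pos.x - circle.prevPos.x;
    var dy = circle.pos.y - circle.prevPos.y;
    // Signed distance traveled along the line's tangent direction.
    var tangentX = -contactNormal.y, tangentY = contactNormal.x;
    var distance = dx * tangentX + dy * tangentY;
    // Rolling without slipping: dtheta = s / r.
    circle.angle += distance / circle.radius;
}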

Box backface culling

Assume I have a camera defined by its position and direction, and a box defined by its center and extents (three orthogonal vectors from the box center to the face centers). A face is visible when its outer surface is facing the camera and invisible when its inner surface is.
It seems obvious that, depending on box position and orientation, 1-3 faces of the box may be visible. Is there some clever way to determine which faces are visible? An obvious solution is to compute six dot products of the face normal against the face-to-camera vector, one per face. Is there a better way?
Note: perspective projection will be used, but I do not think it matters; the property of "facing the camera" seems independent of the projection.
I believe the method you described is the normal way to do this. It's a very fast calculation, so you shouldn't be too worried about speed. This is the same method used to reduce the number of calculations in ray-triangle intersection algorithms: if the front of the face isn't visible, the method doesn't continue calculating for that face. See this paper for a C++ implementation of the algorithm; it's in the first half of the calculations. http://jgt.akpeters.com/papers/MollerTrumbore97/code.html
The only cleverness is that if a face of the cube is visible, the opposing face definitely isn't. At least in a regular perspective projection.
Note that the opposite might not be true: if a face is invisible, the opposing face might be invisible too. This is because the type of projection does matter. Imagine the cube being really up close to the camera, which is looking straight at one face. Then rotate the cube slightly, and while with a parallel projection, another face would immediately become visible, in a perspective projection this doesn't happen.
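
A minimal sketch of the dot-product test described above, using three.js vector helpers; the data layout (extents given as an array of three THREE.Vector3 values) is an assumption:

// A face is front-facing when the vector from its center to the camera
// makes a positive dot product with its outward normal.
function visibleFaces(boxCenter, extents, cameraPos) {
    var visible = [];
    extents.forEach(function (axis, i) {
        [1, -1].forEach(function (sign) {
            var faceCenter = boxCenter.clone().add(axis.clone().multiplyScalar(sign));
            var normal = axis.clone().multiplyScalar(sign).normalize();
            if (normal.dot(cameraPos.clone().sub(faceCenter)) > 0) {
                visible.push({ axis: i, sign: sign });
            }
        });
    });
    return visible; // at most 3 faces when the camera is outside the box
}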
