I've been working on a game set in space, meaning the player can move through the solar system.
The issue comes when the player travels far from the origin and runs into Float32 precision problems.
I've been searching for a few hours for a fix, but nothing has helped so far.
I also tried rescaling all the meshes to be tiny, about 100 times smaller than their initial scale, but that behaves the same once the coordinates get large.
Another solution would be to translate the world, not the player, which should do the job, but I honestly have no clue how to achieve this without changing each mesh's position.
I've also set the renderer to use { logarithmicDepthBuffer: true }, but that still doesn't help: the player model starts jumping and flickering.
I've spent a lot of time trying to find a solution to this issue, so I appreciate any kind of advice.
To move your scene you can use:
scene.translateX(i);
scene.translateY(i);
scene.translateZ(i);
Where i is the increment from the existing position offset. This gives you the illusion of first-person movement.
This is a common solution to very large scenes.
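A minimal sketch of the idea (moveDelta is an illustrative name for the movement you would otherwise apply to the player each frame): keep the camera near the origin and move the scene the opposite way, so coordinates around the player stay small and precise.

// Instead of moving the player by moveDelta, move the world by -moveDelta.
function updateMovement(moveDelta) {
  scene.translateX(-moveDelta.x);
  scene.translateY(-moveDelta.y);
  scene.translateZ(-moveDelta.z);
}

The player then never leaves the high-precision region near the origin.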
I've written a simple 3D dungeon generator using Three.js, but since I use a lot of spotlights for torches in the dungeon, I'm starting to see an FPS drop.
I know the lights are the problem, but before I tackle the light issue I was thinking it might be possible to optimize the level. The level is built using only 200x200 planes with a wall texture. I've read about instancing; is that what I want in this scenario? The walls won't move, and if any do, I can make separate meshes for the moving ones.
For lighting I'm using MeshLambertMaterial, which should be the fastest, but besides that I've done nothing to improve performance on that front. I tried to bake the room lights into the textures with https://github.com/mem1b/lightbaking but failed.
So in the end, is instancing the approach to optimize the level polygons? I've read a little about it but could not fully understand it.
Say you have 100 torches distributed along a flat wall. Each one pretty much only affects the part of the wall it's closest to; area-wise, that's about 1/100th of the wall.
Now what if you divide the wall into units of wall segment + torch? Instead of giving the global scene all 100 lights, you'd give each segment a single local light.
Cool: now if you render the entire wall you get 100 lights at the cost of one, computationally, but you also introduce 100 draw calls instead of one.
Instancing helps here. You can instance your wall-segment geometry, and then for each instance set a light attribute instead of a uniform.
This way you can draw 100 lights on 100 wall segments in one draw call, which is much faster than drawing everything 100 times.
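Here's a rough sketch of that setup with a ShaderMaterial (segment layout, torch placement, and falloff are illustrative, not a drop-in solution):

// One wall-segment geometry, instanced 100 times, with each torch's
// position passed as an instanced attribute instead of a uniform.
const COUNT = 100;
const geometry = new THREE.InstancedBufferGeometry().copy(new THREE.PlaneGeometry(2, 2));
geometry.instanceCount = COUNT;

const offsets = new Float32Array(COUNT * 3);
const lightPositions = new Float32Array(COUNT * 3);
for (let i = 0; i < COUNT; i++) {
  offsets[i * 3] = i * 2;          // lay the segments out along x
  lightPositions[i * 3] = i * 2;   // torch centered on its own segment,
  lightPositions[i * 3 + 1] = 1.0; // slightly above,
  lightPositions[i * 3 + 2] = 0.5; // and just in front of the wall
}
geometry.setAttribute('offset', new THREE.InstancedBufferAttribute(offsets, 3));
geometry.setAttribute('lightPos', new THREE.InstancedBufferAttribute(lightPositions, 3));

const material = new THREE.ShaderMaterial({
  vertexShader: `
    attribute vec3 offset;   // per-instance segment position
    attribute vec3 lightPos; // per-instance torch position
    varying vec3 vPos;
    varying vec3 vLightPos;
    void main() {
      vec3 p = position + offset;
      vPos = p;
      vLightPos = lightPos;
      gl_Position = projectionMatrix * modelViewMatrix * vec4(p, 1.0);
    }`,
  fragmentShader: `
    varying vec3 vPos;
    varying vec3 vLightPos;
    void main() {
      // Simple distance falloff standing in for a real lighting model.
      float d = distance(vPos, vLightPos);
      gl_FragColor = vec4(vec3(1.0, 0.8, 0.5) / (1.0 + d * d), 1.0);
    }`
});

scene.add(new THREE.Mesh(geometry, material)); // 100 lit segments, one draw call

Because the light position is an attribute, every segment reads its own torch within the same draw call; with a uniform you'd be stuck re-drawing once per light.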
When using DeviceOrientationControls, I need to allow the user to reset their view to an arbitrary direction. Basically, if I'm sitting in a chair with a limited range of head motion, I want to allow the camera to switch to a different direction (how I trigger that change is not important).
alphaOffsetAngle works great for resetting the view to look left, right, or behind, but not for looking up or down (or left/right, but rotated).
I tried adding offset angles for beta and gamma, but that wasn't as straightforward as I hoped. I also tried adding the camera to an Object3D and rotating the parent. That sort of worked, but the controls got all wonky when the camera's parent was rotated.
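For reference, the parent-rotation attempt looked roughly like this (a sketch from memory, not working code):

const rig = new THREE.Object3D();
rig.add(camera);
scene.add(rig);
const controls = new THREE.DeviceOrientationControls(camera);
// On my "recenter" trigger, rotate the parent instead of the camera,
// since controls.update() overwrites camera.quaternion every frame:
rig.rotation.y = Math.PI / 2; // e.g. look 90 degrees to the left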
lookAt() is pretty much what I want, but the DeviceOrientationControls update() seems to blow that away.
Does anyone have a working example of this arbitrary camera direction with the deviceorientationcontrols?
This question is similar to these, but I have not found a workable solution:
Add offset to DeviceOrientationControls in three.js
and:
DeviceOrientationControls.js - Calibration to ideal starting center
When several objects overlap on the same plane, they start to flicker. How do I tell the renderer to put one of the objects in front?
I tried to use .renderDepth, but it only works partly -
see example here: http://liveweave.com/ahTdFQ
Both boxes have the same size and it works as intended; I can change which of the boxes is visible by setting .renderDepth. But if one of the boxes is a bit smaller (say 40, 50, 50), the touching faces flicker and the render depth doesn't work anymore.
How do I fix that issue?
When .renderDepth doesn't work, you have to set the depth offsets yourself.
Moving whole meshes around is indeed not really efficient.
What you are looking for are offsets bound to materials:
material.polygonOffset = true;
material.polygonOffsetFactor = -0.1;
should solve your issue. See update here: http://liveweave.com/syC0L4
Use negative factors to pull a surface forward (show it) and positive factors to push it back (hide it).
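A small self-contained example of the same fix, assuming two coplanar planes where the red one should always win:

const wall = new THREE.Mesh(
  new THREE.PlaneGeometry(10, 10),
  new THREE.MeshBasicMaterial({ color: 0x808080 })
);
const decalMaterial = new THREE.MeshBasicMaterial({ color: 0xff0000 });
decalMaterial.polygonOffset = true;
decalMaterial.polygonOffsetFactor = -1; // negative pulls it toward the camera
decalMaterial.polygonOffsetUnits = -1;
const decal = new THREE.Mesh(new THREE.PlaneGeometry(5, 5), decalMaterial);
scene.add(wall, decal); // same z, but no more flickering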
For starters, try reducing the far range on your camera; try 1000. Generally speaking, you shouldn't have overlapping faces in your 3D scene unless they are treated in a VERY specific way (look up the terms 'decals'/'decal textures'). So basically you have to create depth offsets, and perhaps even pre-sort the objects, which all requires pretty low-level tinkering.
If the far-range reduction helps, then you're experiencing a lack of depth precision (how severe depends on the device). Also look up 'z-fighting'.
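For example (the values are placeholders): a tighter near/far range leaves the z-buffer more precision for separating nearby surfaces, and raising the near plane helps even more than lowering the far plane.

const camera = new THREE.PerspectiveCamera(
  75,                                     // fov
  window.innerWidth / window.innerHeight, // aspect
  1,    // near: raise this as far as your scene allows
  1000  // far: the reduced value suggested above
);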
UPDATE
Don't overlap planes.
How do I tell the renderer to put one of the objects in front?
You put one object in front of the other :)
For example, if you have a camera at 0,0,0 looking at an object at 0,0,10 and you want another object to be behind the first one, put it at 0,0,11; it should work.
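In code, with hypothetical geometry and materials:

camera.position.set(0, 0, 0);
const front = new THREE.Mesh(geometry, materialA);
front.position.set(0, 0, 10);
const behind = new THREE.Mesh(geometry, materialB);
behind.position.set(0, 0, 11); // farther along the view direction
camera.lookAt(front.position);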
UPDATE2
What is z-buffering:
http://en.wikipedia.org/wiki/Z-buffering
http://msdn.microsoft.com/en-us/library/bb976071.aspx
Take note of "floating point in range of 0.0 - 1.0".
What is z-fighting:
http://en.wikipedia.org/wiki/Z-fighting
...have similar values in the z-buffer. It is particularly prevalent with coplanar polygons, where two faces occupy essentially the same space, with neither in front. Affected pixels are rendered with fragments from one polygon or the other arbitrarily, in a manner determined by the precision of the z-buffer.
"The renderer cannot reposition anything."
I think that this is completely untrue. The renderer can reposition everything, and probably does unless it's ShaderToy or some video filter. Every time you move your camera the renderer repositions everything (the camera is actually the only thing that DOES NOT MOVE).
It seems that you are missing some crucial concepts here; I'd start with this:
http://www.opengl-tutorial.org/beginners-tutorials/tutorial-3-matrices/
About the depth offset mentioned:
Here's how it would work: say you want to draw a decal on a surface. You can 'draw' another mesh onto that surface, for example by projecting a quad onto it. Say you want to draw a bullet hole over a concrete wall, so you end up with two coplanar surfaces: the wall and the bullet hole. You can figure out the depth-buffer precision, find the smallest representable step, and then move the bullet-hole mesh by that value toward the camera. The object does not get scaled (you're doing this in NDC, which you can visualize as a cube in which you nudge planes back and forth by the smallest possible increment); it just translates in the depth direction, ending up in front of the other surface.
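A rough sketch of that nudge, done in world space for simplicity rather than in NDC as described (the epsilon is a placeholder; the right value depends on your depth-buffer precision):

// Move the decal a tiny step toward the camera so it wins the depth test.
const epsilon = 1e-4;
const toCamera = new THREE.Vector3()
  .subVectors(camera.position, decalMesh.position)
  .normalize();
decalMesh.position.addScaledVector(toCamera, epsilon);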
I don't see any flicker; the cube movement in 3D looks perfectly smooth. Can you try on a different computer (maybe a faster one)? I used Chrome on a MacBook Pro.
The situation: I need to be able to track a hovering drone's translation (not height) and rotation over the ground using a downward-facing camera. I don't know where to start looking; can anyone with experience point me to some theory or resources? I'm looking for the kind of algorithm an optical mouse uses, but I'm not having much luck so far. Most results cover tracking an object in a fixed frame; in my case the environment is relatively static and the camera moves.
I'm currently drawing a 3D solar system and I'm trying to draw the paths of the planets' orbits. The calculated data is correct in 3D space, but when I travel toward Pluto, the orbit line shakes all over the place until the camera comes to a complete stop. I don't think this is unique to that particular planet; given the distance the camera has to travel, I think it's just more visible at that range.
I suspect it's something to do with the frustum, but I've been plugging values into each of its components and I can't seem to find a solution. To see anything I'm having to use very small numbers (around E-5 magnitude) for the planet and nearby orbit points, but up to E+2 magnitude for the farther regions (maybe I need to draw it twice with different frustums?).
Any help greatly appreciated...
Thanks all for answering, but my solution was to draw the orbit with the same matrices that were drawing the planet, since the planet wasn't bouncing around. So the solution really is just to code better, sorry.
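Expressed in Three.js for concreteness (a sketch; localOrbitPoints is a hypothetical helper returning small, planet-relative coordinates):

// Put the planet and its orbit line under one shared parent, so the large
// translation lives in the shared matrix instead of in the vertex data.
const system = new THREE.Object3D();
system.position.set(0, 0, 5.9e9);  // the big offset, applied once
system.add(planetMesh);            // assumed to exist already
const orbitGeometry = new THREE.BufferGeometry().setFromPoints(localOrbitPoints());
system.add(new THREE.Line(orbitGeometry, new THREE.LineBasicMaterial()));
scene.add(system);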