threejs sketching tool with cube geometry - three.js

I am working on a sketching tool built with three.js. The tool should allow users to draw cubes in any direction. I have partially achieved this, but when I scale an object in a negative direction the face colors get inverted. I am looking for a way to avoid this color inversion; the cube should look the same under both positive and negative scaling.
Please kindly help..!!
Thanks in advance.
[Screenshots: scaling in the positive direction vs. scaling in the negative direction.]

If scaling negatively has unwanted artifacts, why not just avoid doing it? Your cube meshes are symmetrical, so as far as I can tell there is no difference in the desired result.
In other words, display -50, but scale by the absolute value (50).
mesh.scale.set(Math.abs(sx), Math.abs(sy), Math.abs(sz)); // sx, sy, sz are your signed dimensions
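For instance, a minimal sketch of that idea, where the signed size lives in the tool's own state (the size object and sizeLabel element here are hypothetical):

// Keep the signed value for display, but never hand a negative scale to the mesh.
var size = { x: -50, y: 30, z: 20 };                   // what the user edits and sees
mesh.scale.set(Math.abs(size.x), Math.abs(size.y), Math.abs(size.z));
sizeLabel.textContent = size.x + ' × ' + size.y + ' × ' + size.z;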
If you really -do- need geometry flipping, take a look at this answer.

Since you draw the cube and then scale it below zero, its faces end up turned inside out. You can set the box's material to be double-sided.
I assume you are using MeshBasicMaterial
// THREE.DoubleSide renders both the front and back of each face,
// so the cube looks the same even when the geometry is mirrored.
var material = new THREE.MeshBasicMaterial({ side: THREE.DoubleSide });
var box = new THREE.Mesh(boxGeometryInstance, material);

Related

How to calculate near plane of camera in three.js with model bounding box

I am creating a viewer using three.js and found that setting the camera's near and far planes to fixed values causes flickering for some 3D models.
I can see this is because the GPU runs out of depth-buffer precision for models with a bounding box around 4000-5000 units long.
The near plane is currently set to 0.1 and the far plane to 20000.
You can move your near plane up to get more depth resolution. Maybe 1.0...
Another option to be aware of is logarithmic depth buffer:
https://threejs.org/examples/webgl_camera_logarithmicdepthbuffer.html
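Enabling it is a one-flag change when creating the renderer (a sketch; it trades some fragment-shader performance for depth precision):

// Logarithmic depth buffer spreads precision more evenly across large scenes.
var renderer = new THREE.WebGLRenderer({ antialias: true, logarithmicDepthBuffer: true });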
You can get the bounding box of the mesh via its geometry: geometry.boundingBox and geometry.boundingSphere. Sometimes you need to recompute them first using mesh.geometry.computeBoundingBox() and computeBoundingSphere().
Getting the bounding box in camera space is a bit trickier. I don't know of a super-optimal one-liner to do it, but someone else may weigh in.
A brute-force way would be to transform the mesh vertices to screen space.
Maybe something like:
// Clone the geometry so the original vertices are left untouched
var gclone = mesh.geometry.clone();
for (var i = 0; i < gclone.vertices.length; i++) {
    // world space first, then project into normalized device coordinates
    gclone.vertices[i].applyMatrix4(mesh.matrixWorld).project(camera);
}
gclone.computeBoundingBox();
var zExtent = gclone.boundingBox.max.z - gclone.boundingBox.min.z;
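A cheaper alternative (a sketch, assuming a single mesh with unit scale) is to fit the near and far planes around the mesh's bounding sphere in world space:

// Fit near/far to the mesh's bounding sphere; if the mesh is scaled,
// the radius would need to be scaled accordingly.
mesh.geometry.computeBoundingSphere();
var sphere = mesh.geometry.boundingSphere;
var center = sphere.center.clone().applyMatrix4(mesh.matrixWorld);
var dist = camera.position.distanceTo(center);
camera.near = Math.max(0.1, dist - sphere.radius);
camera.far = dist + sphere.radius;
camera.updateProjectionMatrix(); // required after changing near/far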

Three.js: point light strange spot on the ground

Example here
I'm using a PointLight in the scene, and the ground reflects a big light spot. What should I do to remove it? I only need to highlight some area of the scene. I can decrease the light's distance, but that only shrinks the spot, which is not what I expected. Should I use a specific material on the ground, or something like that?
The answer is very easy: set the ground material's roughness to 1. I've updated the example to show how roughness changes the material.
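For example, assuming the ground uses MeshStandardMaterial (whose roughness property controls how sharp specular highlights are):

// roughness = 1 makes the surface fully diffuse, removing the specular hot spot
var groundMaterial = new THREE.MeshStandardMaterial({ color: 0x808080, roughness: 1.0 });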

Box backface culling

Assume I have a camera defined by its position and direction, and a box defined by its center and extents (three orthogonal vectors from the box center to the face centers). A face is visible when its outer surface faces the camera and invisible when its inner surface does.
It seems obvious that, depending on the box's position and orientation, 1-3 of its faces may be visible. Is there some clever way to determine which faces are visible? An obvious solution would be to compute six dot products of each face normal against the face-to-camera vector. Is there a better way?
Note: perspective projection will be used, but I do not think it matters; the property of "facing the camera" seems independent of the projection.
I believe the method you described is the normal way to do this. It's a very fast calculation, so you shouldn't worry too much about speed. This is the same method used to reduce the number of calculations in ray-triangle intersection algorithms: if the front of the face isn't visible, the method doesn't continue calculating for that face. See this paper for a C++ implementation of the algorithm; it's in the first half of the calculations. http://jgt.akpeters.com/papers/MollerTrumbore97/code.html
The only cleverness is that if a face of the cube is visible, the opposing face definitely isn't. At least in a regular perspective projection.
Note that the opposite might not be true: if a face is invisible, the opposing face might be invisible too. This is because the type of projection does matter. Imagine the cube being really up close to the camera, which is looking straight at one face. Then rotate the cube slightly, and while with a parallel projection, another face would immediately become visible, in a perspective projection this doesn't happen.
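A sketch of the dot-product test with that one optimization, using THREE.Vector3 (the face-index layout here is an assumption):

// center: box center; extents: three half-extent Vector3s; camPos: camera position.
// Returns indices 0..5 of visible faces (2*i for the +axis face, 2*i+1 for the -axis face).
function visibleFaces(center, extents, camPos) {
    var visible = [];
    for (var i = 0; i < extents.length; i++) {
        var axis = extents[i]; // outward normal direction of the +axis face
        var posFace = center.clone().add(axis);
        var negFace = center.clone().sub(axis);
        if (axis.dot(camPos.clone().sub(posFace)) > 0) {
            visible.push(2 * i);     // +axis face sees the camera
        } else if (axis.dot(camPos.clone().sub(negFace)) < 0) {
            visible.push(2 * i + 1); // -axis face sees the camera
        }
        // otherwise neither face of this pair is visible (possible in perspective)
    }
    return visible;
}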

Working with Three.js

Context: trying to take THREE.js and use it to display conic sections.
Method: creating a mesh of vertices and then connecting Face4s to all of them. I used two faces to produce a front and a back side, so that when the conic section rotates it doesn't matter from which angle the camera views it.
Problems encountered:
1. Finding a good, intuitive mouse-rotation scheme. Thinking in spherical coordinates, it feels like up/down should change phi and left/right should change theta, but that requires being able to move the camera. As far as I can tell, there is no way to actively change the rotation of anything besides the objects. Does anyone know how to change the rotation of the camera or scene?
2. Is there a way to graph functions that is better than creating a mesh? If the mesh has many points it is too slow, and if it has few points you cannot easily make out the shape of the conic sections.
Any sort of help would be most excellent.
I'm still starting to learn Three.js, so I'm not sure about the second part of your question.
For the first part, to change the camera, there is a very good way, which could also include zooming and moving the scene: the trackball camera.
For the exact code and how to use it, you can view:
https://github.com/mrdoob/three.js/blob/master/examples/webgl_trackballcamera_earth.html
At the bottom of this page (http://mrdoob.com/122/Threejs) you can see the example in action (the globe in the third row from the bottom).
There is an orbit control script for the three.js camera.
I'm not sure if I understand the rotation bit. You do want to rotate an object, but you are correct, the rotation is relative.
When you rotate or move your camera, a matrix is calculated for that position/rotation, and it does indeed rotate the scene while keeping the camera static.
This is irrelevant though, because you work in model/world space, and you position your camera in it, the engine takes care of the rotations under the hood.
What you probably want is to set up an object, hook your rotation up to spherical coordinates, and link your camera as a child of this object. Translating the camera along its Z axis relative to the object then mimics a dolly (zoom is a FOV change).
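A minimal sketch of that setup (the pivot object and the theta/phi state are assumptions):

// Pivot-object rig: rotating the pivot orbits the camera around the origin.
var pivot = new THREE.Object3D();
scene.add(pivot);
pivot.add(camera);
camera.position.set(0, 0, 10); // moving along local Z acts as the dolly

// Drive these angles from mouse deltas:
function updateRig(theta, phi) {
    pivot.rotation.y = theta; // left/right
    pivot.rotation.x = phi;   // up/down
}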
You can rotate the camera by changing its position. See the code I pasted here: https://gamedev.stackexchange.com/questions/79219/three-js-camera-turning-leftside-right
As others are saying, OrbitControls.js is an intuitive way for users to manage the camera.
I tackled many of the same issues when building formulatoy.net. I used morphing geometries, since I found that mapping 3D math functions to a UV surface requires very little code and makes it easy to implement different coordinate systems (Cartesian, spherical, cylindrical).
You could use particles instead of a mesh, I suppose, but a mesh seems best. The lattice material is not too useful if you're trying to understand a surface mathematically. At this point I'm thinking of drawing my own X,Y lines on the surface (or phi, theta lines, etc.) to better demonstrate cross-sections.
Hope that helps.
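That UV-surface approach looks roughly like this (a sketch; THREE.ParametricGeometry is in core in older builds and under examples/ in recent ones, and its callback writes into a target vector):

// Map (u, v) in [0,1]^2 onto a surface; here a simple cone, whose
// plane slices are exactly the conic sections.
function cone(u, v, target) {
    var theta = u * Math.PI * 2; // angle around the axis
    var r = v * 5;               // radius grows linearly along the axis
    target.set(r * Math.cos(theta), r * Math.sin(theta), r);
}
var geometry = new THREE.ParametricGeometry(cone, 64, 32);
var surface = new THREE.Mesh(geometry, new THREE.MeshNormalMaterial({ side: THREE.DoubleSide }));
scene.add(surface);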
You can use trackball controls to zoom in and out of an object, rotate it, and pan. With trackball controls you are moving the camera around the object; the object still rotates with respect to the screen or renderer centre (0, 0, 0).
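Wiring that up takes only a couple of lines (a sketch, assuming the OrbitControls script from the three.js examples is loaded):

// OrbitControls: drag to orbit, scroll to zoom, right-drag to pan.
var controls = new THREE.OrbitControls(camera, renderer.domElement);
controls.target.set(0, 0, 0); // the point the camera orbits around
controls.update();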

How to generate one texture from N textures?

Let's say I have N pictures of an object, taken from N known positions. I also have the 3D geometry of the object, and I know all the characteristics of both the camera and the lens.
I want to generate a unique giant picture from the N pictures I have, so that it can be mapped/projected onto the object surface.
Does anybody know where to start? Articles, references, books?
Not sure if it helps you directly, but these guys have some amazing demos of some related techniques: http://grail.cs.washington.edu/projects/videoenhancement/videoEnhancement.htm.
1. Generate texture-mapping coords for your geometry.
2. Generate a big blank texture.
3. For each pixel:
   - Figure out the point on the geometry it maps to.
   - Figure out the pixel in each image that projects onto this point.
   - Colour the pixel with a weighted blend of all these pixels, weighted by how much the surface normal faces the corresponding camera, ignoring images where another piece of geometry sits between the point and the camera (see the sketch after this list).
4. Apply your completed texture to the geometry.
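A hedged sketch of that per-texel blend in three.js-style JavaScript; surfacePointForTexel, sampleImage, and isOccluded are hypothetical helpers standing in for your geometry lookup, camera projection, and visibility test:

// Blend all unoccluded views of the surface point behind texel (u, v),
// weighting each view by how directly the surface faces that camera.
function bakeTexel(u, v, cameras, images) {
    var p = surfacePointForTexel(u, v); // hypothetical: { position, normal } on the geometry
    var color = new THREE.Color(0, 0, 0);
    var totalWeight = 0;
    for (var i = 0; i < cameras.length; i++) {
        if (isOccluded(p.position, cameras[i])) continue; // blocked by other geometry
        var toCam = cameras[i].position.clone().sub(p.position).normalize();
        var w = p.normal.dot(toCam); // cosine weight: how much the normal faces the camera
        if (w <= 0) continue;        // surface is back-facing for this view
        color.add(sampleImage(images[i], cameras[i], p.position).multiplyScalar(w));
        totalWeight += w;
    }
    if (totalWeight > 0) color.multiplyScalar(1 / totalWeight); // normalize the blend
    return color;
}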
Google up "shadow mapping", as the same problem is solved during that process (images of the scene as seen from some known points are projected onto the 3D geometry in the scene). The problem is well-understood and there is plenty of code.
I'd suspect that this can be done using some variation of projection maps mixed with image reconstruction.
Have a look at cubemapping. It may be useful. You may want to project another convex shape to the cube and use the resulting texture as a conventional cubemap texture.
