How can I get light emission per vertex and per-vertex lighting in Three.js?

I want to see a chart with a color specified per vertex and to get a little bit of shading too.
But if I use MeshBasicMaterial I only get the vertex colors with no dynamic shading.
On the other hand, if I use MeshPhongMaterial I just get shading, but without the emissiveness from my vertex colors.

As the three.js MeshPhongMaterial supports vertexColors, giving you a nice combination of dynamic lighting and vertex colors, I'm not quite sure I understand your question. Perhaps that is something you should investigate more?
However, as an alternative to writing a custom shader you could try rendering your model in multiple passes.
This will not give you as much control over the way the vertex colors and phong lighting are combined as a shader would, but often a simple add/multiply blend can give pretty decent results.
Algorithm:
- create two meshes for the BufferGeometry, one with the BasicMaterial and one with the PhongMaterial
- for the PhongMaterial, set:
  depthFunc = THREE.EqualDepth
  transparent = true
  blending = THREE.AdditiveBlending (or THREE.MultiplyBlending)
- render the first mesh
- render the second mesh at the exact same spot
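
A minimal sketch of that setup (assuming `geometry` is your BufferGeometry with a color attribute and `scene` is your scene; on older three.js builds use THREE.VertexColors instead of the boolean flag):

// First pass: unlit vertex colors only
var baseMaterial = new THREE.MeshBasicMaterial({ vertexColors: true });

// Second pass: Phong lighting blended on top of the first pass
var lightMaterial = new THREE.MeshPhongMaterial({
  vertexColors: true,
  depthFunc: THREE.EqualDepth,      // only touch fragments already laid down by the first pass
  transparent: true,
  blending: THREE.AdditiveBlending  // or THREE.MultiplyBlending
});

var baseMesh = new THREE.Mesh(geometry, baseMaterial);
var litMesh = new THREE.Mesh(geometry, lightMaterial);
litMesh.renderOrder = 1;            // make sure the blended pass is drawn second

scene.add(baseMesh, litMesh);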

Related

Lights on BufferGeometry not correct

I have a BufferGeometry that uses MeshLambertMaterial and vertex colors. When I apply lights, as the gif below shows, the lighting gets distorted when the BufferGeometry consists of differently sized faces with the same color. If I use a different color for each face, with smaller faces (1x1), the lighting looks good. I've tried to calculate face normals, but that doesn't solve the issue.
Anything I miss?
Here is a gif showing the issue
You are using per-vertex lighting; you probably want per-pixel lighting instead.
https://en.wikipedia.org/wiki/Per-pixel_lighting
http://www.learnopengles.com/tag/per-vertex-lighting/
It is my understanding that three.js almost exclusively focuses on PBR lighting/shaders. With this mindset, I'm not even sure what Lambert should be used for. Either way, MeshLambertMaterial only supports per-vertex lighting, not per-pixel, so you will always get these artifacts from interpolating across different topologies. There are no limitations that prevent this from working differently; it's just by design.
MeshPhongMaterial, on the other hand, does per-pixel lighting, but because of all the physical correctness, you might have a hard time removing the specular term and leaving only the Lambert (diffuse) term.
If you opt for this, you might find yourself having to do something like this:
var myBlackTexture = obtainTextureThatIsBlack();
var myMaterial = new THREE.MeshPhongMaterial({ ..., specularMap: myBlackTexture });
https://github.com/mrdoob/three.js/issues/10808
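A simpler variant (a sketch, not taken from the linked issue) is often to just zero out the material's specular parameters instead of supplying a black texture:

// Per-pixel (Phong) lighting with the specular contribution effectively removed.
// `geometry` is assumed to carry a color attribute for the vertex colors.
var material = new THREE.MeshPhongMaterial({
  vertexColors: true,
  specular: 0x000000,  // black specular color: no highlight
  shininess: 0
});
var mesh = new THREE.Mesh(geometry, material);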
Summary:
Anything I miss?
You missed the arbitrary three.js design caveats :) It will remain a mystery why this material exists as it is, and why it doesn't just have a flag to flip between vertex and fragment lighting.

Emit light from a geometry in Three.js

Is it possible to have a custom geometry emit light in Three.js?
There is a similar question from 5 years ago here.
In my particular case, I have created a TorusGeometry. I would like this torus to also give off light. Is that possible?
The only true way to do this is raytracing, in which case your torus becomes an "emitter" of photons and its geometry is used to calculate the initial directions of said photons.
Otherwise, light technically doesn't exist. Only (mathematical) descriptions of lights exist. (Remember, lights aren't visible/aren't rendered unless you're using a LightHelper.) These descriptions are used by material shaders, which use the light descriptions (and other objects in the scene, in the case of shadows) to determine the color the current fragment should contribute to a pixel.
With this in mind, if you could write a shader to handle a torus-shaped light, then all you need to do is provide that light's information to the shader. You can do this by extending a THREE.js light class to make your own TorusLight to add to the scene, then give the objects in your scene your custom shader.
THAT SAID, if you'd be satisfied with simulating the torus light, and want a visible torus, you can always add a PointLight at the position of your torus (or several throughout the body of the torus), and give your torus some kind of glow effect.
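For example, a rough sketch of that simulation (the radii, colors and light count below are arbitrary choices, and `scene` is assumed to exist):

// Visible torus that appears to emit light: an emissive material for the "glow"
// plus a few point lights distributed along the ring to fake the emission.
var torusRadius = 2;
var torus = new THREE.Mesh(
  new THREE.TorusGeometry(torusRadius, 0.3, 16, 64),
  new THREE.MeshStandardMaterial({ color: 0xffaa00, emissive: 0xffaa00 })
);
scene.add(torus);

var lightCount = 6;
for (var i = 0; i < lightCount; i++) {
  var angle = (i / lightCount) * Math.PI * 2;
  var light = new THREE.PointLight(0xffaa00, 0.5, 10);
  // place each light on the ring, in the torus' local XY plane
  light.position.set(Math.cos(angle) * torusRadius, Math.sin(angle) * torusRadius, 0);
  torus.add(light);
}

A real torus-shaped area light would need the custom shader approach described above; the point lights only approximate it.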

Changing scale of mesh changes lighting brightness of scene?

I needed to refactor my custom mesh creation a bit
from:
create meshes of a unified size (SIZE, SIZE, SIZE), then scale them as needed (setting the scale for each axis)
to:
create meshes with the correct size, do not scale them later
the meshes are custom generated (vertices, faces, normals, uvs); nothing in this process was altered, and it worked like a charm before
=> the resulting meshes have the same size, position, etc.
The whole scene setup stays the same: lights, shadowing, materials. Yet when using the second approach the whole lighting is very, very bright and super reflective. Is that a known issue?
The material used is MeshPhongMaterial with map, bumpMap, specularMap and envMap.
using three.js r68, no error/warning in console
before:
https://cloud.githubusercontent.com/assets/3647854/3876053/76b8f260-2158-11e4-9e96-c8de55eaec9a.png
after:
https://cloud.githubusercontent.com/assets/3647854/3876052/76b7fa86-2158-11e4-9393-8f3eece04c0b.png
Did you rescale the normals in the mesh?
The mesh format probably needs normalized normals, in which case the new normals are now incorrect, but they would have been correct if you hadn't rescaled.
Alternatively, you say the lights haven't been changed; maybe they need to be appropriately redirected in the scene (assuming you're applying different scaling factors on each axis).
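If the normals are the issue, re-deriving or renormalizing them on the final geometry usually fixes it; a minimal sketch, assuming a BufferGeometry-based setup (method names are from current three.js, older releases may differ):

// After building the geometry at its final size, make sure the normals are unit length
// again instead of carrying the old per-axis scale.
geometry.computeVertexNormals();   // rebuild (and normalize) the vertex normals
// or, to keep your hand-computed directions and only fix their lengths:
geometry.normalizeNormals();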

Fixed texture size in Three.js

I am building quite a complex 3D environment in Three.js (FPS-a-like). For this purpose I wanted to structure the loading of textures and materials in an object-oriented way. For example, materials.wood.brownplank is a reusable material with a certain texture and other properties. Below is a simplified visualisation of the process where models use materials and materials use textures.
loadTextures();
loadMaterials();
loadModels();
//start doing stuff in the scene
I want to use that material on differently sized objects. However, in Three.js you can't (AFAIK) set a certain texture scale. You have to set the repeat to scale it appropriately for your object. But I don't want to do that for every plane of every object I use.
Here is how it looks now
As you can see, the textures are not uniform in size.
Is there an easy way to achieve this? So cloning the texture and/or material every time and setting the repeat according to the geometry won't do :)
I hope someone can help me.
Conclusion:
There is no real easy way to do this. I ended up changing my loading methods: things like materials.wood.brownplank are now, for example, getMaterial('wood', 'brownplank'). In that function new objects are instantiated.
You should be able to do this by modifying your geometry UV coordinates according to the "real" dimensions of each face.
In Three.js, UV coordinates are relative to the face and texture (as in, 0.0 = one edge, 1.0 = other edge), no matter what the actual size of texture or face is. But by modifying the UVs in geometry (multiply them by some factor based on face physical size), you can use the same material and texture in different sizes (and orientations) per face.
You just need to figure out the mapping between UVs, geometry scale and your desired working units (e.g. mm or m). Sorry, I don't have or know of a ready algorithm to do it, but that's the approach you probably need to take. It should be quite doable with a bit of experimentation and google-fu.
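As an illustration of the idea (a sketch, not a ready-made recipe): for an axis-aligned plane you could rescale the generated UVs by the plane's world size, so every face ends up with the same texture density. `textureWorldSize` below is an assumed constant for how many world units one repeat of the texture should cover.

// Make the texture tile at a fixed world-space size regardless of the plane's dimensions.
var textureWorldSize = 2; // one repeat of the texture spans 2 world units (assumption)

function makePlane(width, height, material) {
  var geometry = new THREE.PlaneGeometry(width, height);
  var uv = geometry.attributes.uv;
  for (var i = 0; i < uv.count; i++) {
    // PlaneGeometry's UVs run 0..1 across the face; rescale them to world units.
    uv.setXY(i, uv.getX(i) * (width / textureWorldSize),
                uv.getY(i) * (height / textureWorldSize));
  }
  uv.needsUpdate = true;
  return new THREE.Mesh(geometry, material);
}

// The texture on the material must be set to repeat for UVs above 1.0 to tile:
// texture.wrapS = texture.wrapT = THREE.RepeatWrapping;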

OpenGL ES 2.0 Per Fragment Lighting for Untextured Generated Geometry

I'm currently generating geometry rather than importing it as a model. This makes it necessary to calculate all normals within the application.
I've implemented Gouraud shading (per vertex lighting) successfully, and now wish to implement Phong shading (per fragment/pixel).
I've had a look at relevant tutorials online and there are two camps: one offers a simple Gouraud-to-Phong reshuffling of shader code which, while offering improved lighting, isn't truly per-pixel. The second does things the right way by utilising normal maps embedded within textures, but these are generated within a modelling toolkit such as RenderMonkey.
My questions are:
- How should I go about programmatically generating normals for my generated geometry, considering it as a vertex set? In other words, given a set of discrete polygonal points, will it be necessary to manually calculate interpolated normals?
- Should I store the generated normals within a texture as exemplified online, and if so, how would I go about doing this in code rather than through modelling software?
Computing the lighting in the fragment shader on the interpolated per-vertex normals will definitely yield better results (assuming you properly re-normalize the interpolated normals in the fragment shader) and it is truly per-pixel, although the strength of the difference may vary depending on the model tessellation and the lighting variation. Have you just tried it out?
As long as you don't have any changing normals inside a face (think of bump mapping) and only interpolate the per-vertex normals, a normal map is completely unnecessary, as you get interpolated normals from the rasterizer anyway. Whereas normal mapping can give nicer effects if you really have per-pixel normal variations (like a very rough surface), it is not necessarily the right way to do per-pixel lighting.
It makes a huge difference if you compute the lighting per-vertex and interpolate the colors or if you compute the lighting per fragment (even if you just interpolate the per-vertex normals, that's what classical Phong shading is about), especially when you have quite large triangles or very shiny surfaces (very high frequency lighting variation).
As said, if you don't have high-frequency normal variations (changing normals inside a triangle), you don't need a normal map, and you don't need to interpolate the per-vertex normals yourself. You just generate per-vertex normals like you did for the per-vertex lighting (e.g. by averaging adjacent face normals). The rasterizer does the interpolation for you.
You should first try out simple per-pixel lighting before delving into techniques like normal mapping. If your geometry is not very finely tessellated or you have very shiny surfaces, you will surely see the difference to simple per-vertex lighting. Once this works you can try normal mapping techniques, but for them to work you need to first understand the meaning of per-pixel lighting and Phong shading in contrast to Gouraud shading.
Normal maps are not a requirement for per-pixel lighting.
The only requirement, by definition, is that the lighting solution is evaluated for every output pixel/fragment. You can store the normals on the vertexes just as well (and more easily).
Normal maps can either provide full normal data (rgb maps) or simply modulate the stored vertex normals (du/dv maps, appear red/blue). The latter form is perhaps more common and relies on vertex normals to function.
To generate the normals depends on your code and geometry. Typically, you use dot products and surrounding faces or vertexes for smooth normals, or just create a unit vector pointing in whatever is "out" for your geometry.
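For reference, the "average the adjacent face normals" idea mentioned above looks roughly like this (sketched in JavaScript to match the rest of this page; the same logic ports directly to your OpenGL ES code):

// Smooth per-vertex normals by averaging the face normals of all triangles
// that share a vertex (indexed triangle list; positions are a flat [x, y, z, ...] array).
function computeSmoothNormals(positions, indices) {
  var normals = new Float32Array(positions.length);
  for (var i = 0; i < indices.length; i += 3) {
    var a = indices[i], b = indices[i + 1], c = indices[i + 2];
    // two edge vectors of the triangle
    var e1 = [0, 1, 2].map(function (k) { return positions[3 * b + k] - positions[3 * a + k]; });
    var e2 = [0, 1, 2].map(function (k) { return positions[3 * c + k] - positions[3 * a + k]; });
    // face normal = e1 x e2 (left unnormalized, so larger faces contribute more)
    var n = [
      e1[1] * e2[2] - e1[2] * e2[1],
      e1[2] * e2[0] - e1[0] * e2[2],
      e1[0] * e2[1] - e1[1] * e2[0]
    ];
    [a, b, c].forEach(function (v) {
      normals[3 * v] += n[0];
      normals[3 * v + 1] += n[1];
      normals[3 * v + 2] += n[2];
    });
  }
  // Normalize the accumulated sums; the rasterizer interpolates these normals,
  // and the fragment shader should re-normalize them after interpolation.
  for (var v = 0; v < normals.length; v += 3) {
    var len = Math.sqrt(normals[v] * normals[v] + normals[v + 1] * normals[v + 1] + normals[v + 2] * normals[v + 2]) || 1;
    normals[v] /= len; normals[v + 1] /= len; normals[v + 2] /= len;
  }
  return normals;
}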
