The Rubik's Cube I've modeled from STL parts is lit from all 6 directions with directional lights. As can be seen, the recessed areas end up brighter than the flat surfaces. When I load and position the STL files in Blender, rendering is fine, so I think the files are OK.
So, how can I fix the lighting? Note that setting castShadow/receiveShadow on the light and materials (phong/lambert/standard) doesn't seem to change anything.
Code is at https://github.com/ittayd/rubiks_trainer/blob/master/js/three-cube.mjs#L134
I think what you're seeing makes sense. When a face points straight down an axis, it's only affected by the light in front of it. But when a face sits halfway between axes, it's affected by more than one light, creating an additive effect. You could minimize this by adding shadows, but creating a new shadow map for 3 lights is expensive.
The Rubik's cube has 26 pieces * 3 lights = 78 drawcalls
+26 pieces for the final render = 104 drawcalls per frame!
I recommend you just bake an ambient occlusion map in Blender, then use it in your material via .aoMap to simulate those darker crevices very cheaply and keep your performance smooth. Once you have the AO map, you can use a single AmbientLight to illuminate everything, without needing several different lights (more lights = more expensive renders).
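A minimal sketch of that setup, assuming you've baked the occlusion to a file like cube_ao.png and that the geometry's UVs match the bake (the texture path and the pieceGeometry name are placeholders, not from the original code):

const aoTexture = new THREE.TextureLoader().load('textures/cube_ao.png'); // placeholder path
const material = new THREE.MeshStandardMaterial({
    aoMap: aoTexture,
    aoMapIntensity: 1.0
});
// Older three.js releases sample aoMap from a second UV set ("uv2"),
// so copy the first set if the geometry doesn't already have one:
pieceGeometry.setAttribute('uv2', new THREE.BufferAttribute(pieceGeometry.attributes.uv.array, 2));
scene.add(new THREE.AmbientLight(0xffffff, 1.0)); // one cheap light instead of six
scene.add(new THREE.Mesh(pieceGeometry, material));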
I want to light up everything around the player, in a similar way to how a directional light lights the whole scene. Since I'm working on a top-down view project, I should be able to see an ellipse of light on the ground.
I have tried to achieve the effect using 2 layers: one for everything except the area to be lit, and another containing the area to be lit plus an extra light source:
new three.SpotLight(0xffffff, .6, 0, Math.PI / 3, 0, 0);
The light is always above the player, and the player is its target.
It works, but there are 2 problems: it is not a directional light, so I can't get that good-looking shading on different faces; and the light doesn't form an ellipse all the time:
When the light is cast on imported models, it does make a nice ellipse on the ground around the player, but for programmatically made geometries that doesn't seem to be the case. My guess is that a single face can only receive one color from the light source, but making the geometry more complex would add extra overhead on top of the proper lighting calculations.
I was wondering whether it's possible to limit the directional light so it only affects things inside a sphere.
I think you can make use of Layers: https://threejs.org/docs/#api/en/core/Layers
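A rough sketch of the two-layer idea the question describes (layer index and object names are made up): objects and the extra light belonging to the "lit area" go on layer 1, and the scene is rendered in two passes with the camera's layers switched, so the extra light only contributes to the second pass.

const LIT_LAYER = 1;
playerLight.layers.set(LIT_LAYER);  // the extra light near the player
litGround.layers.set(LIT_LAYER);    // geometry that should be lit
player.layers.set(LIT_LAYER);

renderer.autoClear = false;
function render() {
    renderer.clear();
    camera.layers.set(0);          // pass 1: everything outside the lit area
    renderer.render(scene, camera);
    camera.layers.set(LIT_LAYER);  // pass 2: the lit area plus the extra light
    renderer.render(scene, camera);
}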
Also, try using light from an HDR environment (see PMREMGenerator); it can give nice results that you can combine with your light to make the scene look better.
I have been working on programming a game where everything is rendered in 3D, though the bullets are 2D sprites. This poses a problem: I have to rotate the bullet sprite by rotating the material, which turns every bullet possessing that material rather than the individual sprite I want to turn. It also seems kind of inefficient to create a new sprite clone for every bullet. Is there a better way to do this? Thanks in advance.
Rotate the sprite itself instead of the texture.
edit:
As the OP mentioned, the SpriteMaterial controls the sprite's rotation, so setting it manually on the sprite does nothing...
So instead of using the Sprite type, you could use a regular PlaneGeometry mesh with a MeshBasicMaterial or similar, and update the matrices yourself to both keep the sprite facing the camera and keep it rotated toward its trajectory.
Then at least you can share the material amongst all instances.
Then the performance bottleneck becomes the number of drawcalls (1 per sprite).
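A rough sketch of that per-bullet plane approach (the bulletTexture, camera, and heading names are assumptions, not the poster's code):

const bulletGeometry = new THREE.PlaneGeometry(1, 1);
const bulletMaterial = new THREE.MeshBasicMaterial({ map: bulletTexture, transparent: true });

function createBullet() {
    return new THREE.Mesh(bulletGeometry, bulletMaterial); // every bullet shares one material
}

function updateBullet(bullet, heading) {
    bullet.quaternion.copy(camera.quaternion); // keep the plane facing the camera
    bullet.rotateZ(heading);                   // then spin it toward its trajectory
}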
You can improve on that by using a single BufferGeometry, and computing the 4 screen-space vertices for each sprite, each frame. This moves the bottleneck away from draw calls, and will be limited by the speed at which you can transform vertices in JavaScript, which is slow but not the end of the world. This is also how many THREE.js particle systems are implemented.
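A sketch of that single-BufferGeometry idea, assuming each sprite is an object with a center position and a size (all names here are made up):

const MAX_SPRITES = 1000;
const positions = new Float32Array(MAX_SPRITES * 4 * 3);
const geometry = new THREE.BufferGeometry();
geometry.setAttribute('position', new THREE.BufferAttribute(positions, 3).setUsage(THREE.DynamicDrawUsage));

// Two triangles per sprite quad.
const indices = [];
for (let i = 0; i < MAX_SPRITES; i++) {
    const v = i * 4;
    indices.push(v, v + 1, v + 2, v, v + 2, v + 3);
}
geometry.setIndex(indices);

// Each frame: rebuild the 4 corners of every live sprite from the camera's
// right/up axes, then flag the attribute for re-upload.
function updateSprites(sprites, camera) {
    const right = new THREE.Vector3().setFromMatrixColumn(camera.matrixWorld, 0);
    const up = new THREE.Vector3().setFromMatrixColumn(camera.matrixWorld, 1);
    sprites.forEach((sprite, i) => {
        let o = i * 12;
        for (const [sx, sy] of [[-1, -1], [1, -1], [1, 1], [-1, 1]]) {
            const corner = sprite.center.clone()
                .addScaledVector(right, sx * sprite.size)
                .addScaledVector(up, sy * sprite.size);
            corner.toArray(positions, o);
            o += 3;
        }
    });
    geometry.attributes.position.needsUpdate = true;
}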
The next step beyond that is to use a custom vertex shader to do the heavy vertex computation. You still update the BufferGeometry each frame, but instead of transforming verts, you're just writing the position of the sprite into each of the 4 verts, and letting the vertex shader figure out which of the 4 verts it's transforming (possibly based on the UV coordinate, or stored in one of the vertex color channels, .r for instance) and which sprite to render from your sprite atlas (a single texture/canvas with all your sprites laid out on a grid), encoded in the .g of the vertex color.
The next step beyond that is to not update the BufferGeometry every frame, but to store both the position and velocity of the sprite in the vertex data, and only pass a time offset uniform into the vertex shader. The vertex shader can then handle integrating the sprite position over a longer time period. This only works for sprites that have deterministic behavior, or behavior that can be derived from a texture data source like a noise or warping texture: things like smoke, explosions, etc.
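A very rough sketch of that shader-driven variant, assuming per-vertex velocity and corner attributes have been added to the geometry (none of this is from the original post):

const spriteMaterial = new THREE.ShaderMaterial({
    uniforms: { uTime: { value: 0 } },
    vertexShader: `
        attribute vec3 velocity;  // per-vertex copy of the sprite's velocity
        attribute vec2 corner;    // which of the 4 quad corners this vertex is (-1..1)
        uniform float uTime;
        void main() {
            vec3 center = position + velocity * uTime;  // integrate motion in the shader
            vec4 viewPos = modelViewMatrix * vec4(center, 1.0);
            viewPos.xy += corner * 0.5;                 // expand the quad in view space
            gl_Position = projectionMatrix * viewPos;
        }
    `,
    fragmentShader: `
        void main() { gl_FragColor = vec4(1.0); }
    `
});
// Per frame, only the uniform changes:
spriteMaterial.uniforms.uTime.value = clock.getElapsedTime();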
You can extend these techniques to draw gigantic scrolling tilemaps. I've used them to make multi-layer scrolling/zoomable hexmaps that were 2048 hexes square (a pretty huge map, ~4M triangles), with multiple layers of sprites on top of that, at 60 Hz.
Here's the original Stemkoski particle system for reference:
http://stemkoski.github.io/Three.js/Particle-Engine.html
and:
https://stemkoski.github.io/Three.js/ParticleSystem-Dynamic.html
I needed to refactor my custom mesh creation a bit
from:
create meshes of a uniform size (SIZE, SIZE, SIZE), then scale them as needed (setting the scale for each axis)
to:
create meshes at the correct size, with no scaling afterwards
the meshes are custom generated (vertices, faces, normals, UVs); nothing in this process was altered, and it worked like a charm before
=> resulting meshes are the same size, position, etc.
The whole scene setup stays the same (lights, shadowing, materials), yet with the second approach the lighting is very, very bright and super reflective. Is that a known issue?
The material used is MeshPhongMaterial with map, bumpMap, specularMap, and envMap.
using three.js r68, no error/warning in console
before:
https://cloud.githubusercontent.com/assets/3647854/3876053/76b8f260-2158-11e4-9e96-c8de55eaec9a.png
after:
https://cloud.githubusercontent.com/assets/3647854/3876052/76b7fa86-2158-11e4-9393-8f3eece04c0b.png
Did you rescale the normals in the mesh?
The mesh format probably expects normalized (unit-length) normals. In that case, the new normals are now incorrect, but they would have been correct if you hadn't rescaled.
Alternatively, you say the lights haven't been changed; maybe they need to be appropriately redirected in the scene (assuming you're applying different scaling factors on each axis).
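If unnormalized normals do turn out to be the culprit, recomputing or renormalizing them after generating the geometry may fix the over-bright look. A sketch using the modern BufferGeometry API; r68's legacy Geometry has computeFaceNormals()/computeVertexNormals() instead:

// Recompute normals from the (already correctly sized) vertex positions...
geometry.computeVertexNormals();
// ...or, if you supply the normals yourself, force them to unit length:
geometry.normalizeNormals();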
In a three.js project (viewable here) I have 500 cubes, all of the same size and all statically positioned. On each of these cubes, five of the faces always remain the same color; however, the color of the sixth face can be dynamically updated, and this modification occurs across many of the cubes in a single frame and also occurs across most frames.
I've been able to implement this scene several different ways, but I have not been completely satisfied with the performance of anything I've tried. I know I must not have hit upon the right technique yet or maybe I'm not implementing one quite right. From a performance standpoint, what is the best way to change the color of these cube faces while maintaining independence across each of the cubes?
Here is what I have tried so far:
Create 500 individual CubeGeometry and Mesh instances. Change the color of a geometry face as described in the answer here: Change the colors of a cube's faces. So far this method has performed the best for me, but 500 identical geometries seem less than ideal, especially because I'm not able to achieve a steady 60 fps with a good GPU. Rendering takes about 11-20 ms here.
Create one CubeGeometry and use it across 500 Mesh instances. Create an array of MeshBasicMaterials to make a MeshFaceMaterial for each Mesh. Five of the MeshBasicMaterial instances are the same, representing the five statically colored sides of each cube. Create a unique MeshBasicMaterial to add to the MeshFaceMaterial for each Mesh. Update the color of this unique material with thisMesh.material.materials[3].uniforms.diffuse.value.copy(newColor). This method renders considerably slower than the first (90-110 ms), which surprises me. Maybe it's because 500 cubes with 6 materials each = 3000 materials to process?
Any advice you can offer would be much appreciated!
I discovered that three.js performs a WebGL draw call for each mesh in your scene, and this is what was really hurting my performance. I looked into yaku's suggestion of using BufferGeometry, which I'm sure would be a great route, but BufferGeometry appears to be relatively difficult to use unless you have a good amount of experience with WebGL/OpenGL.
However, I came across an alternative solution that was incredibly effective. I still created individual meshes for each of my 500 cubes, but then I used GeometryUtils.merge() to merge those meshes into a single geometry representing the entire group of cubes, and used that group geometry to create one group mesh. An explanation of GeometryUtils.merge() is here.
What's especially nice about this tactic is that you still have access to all the faces that were part of the underlying geometries/meshes that you merge. In my project, this allowed me to still have full control over the face colors that I wanted control over:
// For 500 merged cubes, there will be 3000 faces in the geometry.
// This code will get the fourth face (index 3) of any cube.
_mergedCubesMesh.geometry.faces[(cubeIdx * 6) + 3].color
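One detail worth noting (an assumption based on the legacy Geometry API shown above, not something stated in the answer): the merged mesh's material must use face colors, e.g. new THREE.MeshBasicMaterial({ vertexColors: THREE.FaceColors }), and after changing a face color you have to flag the geometry so the colors are re-uploaded:

_mergedCubesMesh.geometry.faces[(cubeIdx * 6) + 3].color.set(newColor);
_mergedCubesMesh.geometry.colorsNeedUpdate = true; // tell three.js to re-upload the colors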
So, I was reading about clipping in this Wikipedia article. It seems pretty essential to any and all games, so do I have to do it myself, or is it done automatically by Three.js, or even by WebGL? Thanks!
You can pass values for the near and far clipping planes to your camera object:
var camera = new THREE.PerspectiveCamera( fov, aspect, near, far );
near and far take values such as near = 0.1 and far = 10000, so an object that lies between those distances will be rendered.
EDIT:
near and far represent the clipping planes for your world. In a scene with thousands of objects and textures being drawn at once, it would be taxing on the CPU and GPU to try to show everything. Even worse, it would be wasteful to draw the things you can't even see. The near clipping plane is usually relatively close to the viewer, whereas the far clipping plane is somewhere off in the distance. As objects cross the far plane, they spontaneously appear or disappear. Some games use fog to make the appearance and disappearance of objects look more natural.