In a huge scene (~10k objects), I have a moving light. Every object can both cast and receive shadows. At any given time (i.e. for a given light position), the objective is to know which objects are actually in shadow, not just which ones are capable of receiving shadows. Is there a way to do this?
The light is a DirectionalLight and the objects use MeshLambertMaterial. One approach is to cast rays from the light to each object and check whether they intersect any other object first, but my assumption is that, since shadows are already being rendered, doing my own ray tracing would be redundant. Also, strictly speaking one would need many rays, from every point of an object toward the light, so the answer has to be reduced to a discrete simplification.
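For reference, the per-object raycast I have in mind would look roughly like this (a sketch with a single ray per object; light and shadowCasters are placeholders):

const raycaster = new THREE.Raycaster();

// Direction from any point toward the (infinitely far away) directional light;
// recompute this whenever the light moves.
const toLight = new THREE.Vector3();

function isShadowed(object, light, shadowCasters) {
  toLight.copy(light.position).sub(light.target.position).normalize();
  // One ray from the object's world position toward the light: if any other
  // object is hit, this object is (at least partly) in shadow.
  raycaster.set(object.getWorldPosition(new THREE.Vector3()), toLight);
  return raycaster.intersectObjects(shadowCasters, true)
    .some((hit) => hit.object !== object);
}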
I want to render a room with a floor + roof that is open to one side. The room contains a point light, and the "outside" is lit by an ambient light (the sun). There is one additional requirement: the user should be able to look inside the room to see what's going on. But I cannot simply remove the roof, because then the room is fully lit by the ambient light.
I think my problem could be solved by having 3D objects that are transparent but still block light.
To give you an idea of my current scene, this is what it looks like:
The grey thing is the wall of my room. The black thing is the floor of the room. The green thing is the ground of the scene. The room contains a point light.
I am currently using two scenes (see Exclude Area from Directional/Ambient Lighting) because I wanted the inside of the room to be unaffected by the ambient light. But now my lights can only affect either the inside of my room (the point light) OR the outside (the ambient light) but not both.
A runnable sample of my scene can be found here:
https://codesandbox.io/s/confident-worker-64kg7m?file=/src/index.js
Again: I think that my problem could be solved by having transparent objects that still block light. If I had that, I would simply put a 3D plane on top of my room (as the roof) and make it transparent... It would block the light that is inside of the room (but still let it escape where the room is open), and it would also block the ambient light (partially, where the room is open)...
Maybe there is also another solution that I am not seeing.
Just use one scene instead of two, and enable shadows on the relevant meshes so light doesn't cross from inside to outside. Once you're using only one scene, the steps to take in your demo are:
Disable AmbientLight, and use DirectionalLight only, since AmbientLight illuminates everything indiscriminately, and that's not what you want.
Place the directional light above your structure, so it shines from the top-down.
Enable shadow-casting on the walls.
Add a ceiling mesh with the material's side set to THREE.BackSide. This will render only the back side of the mesh, which means it won't be visible from above, but it will still cast shadows.
import { Mesh, MeshStandardMaterial, BackSide } from "three";

// BackSide-only ceiling: not visible from above, but it still casts a shadow.
const roomCeilMat = new MeshStandardMaterial({
  side: BackSide
});
const roomCeiling = new Mesh(roomFloorGeo, roomCeilMat); // reuse the floor geometry
roomCeiling.position.set(0, 0, 1); // place it at the top of the walls
roomCeiling.castShadow = true;
scene1.add(roomCeiling);
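For completeness, the earlier steps boil down to something like this (a sketch; renderer, wall and ground stand in for the demo's actual variables, and I'm assuming its z-up orientation):

import { DirectionalLight } from "three";

renderer.shadowMap.enabled = true; // shadow maps are off by default

// Directional light above the structure, shining top-down.
const dirLight = new DirectionalLight(0xffffff, 1);
dirLight.position.set(0, 0, 10);
dirLight.castShadow = true;
scene1.add(dirLight);

wall.castShadow = true;      // walls block the light...
ground.receiveShadow = true; // ...and the ground shows their shadow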
See here for a working copy of your demo:
https://codesandbox.io/s/stupefied-williams-qd7jmi?file=/src/index.js
I would assign a flat, emissive material to the room (or a depth gradient if it becomes terrain), since ambient light doesn't cast shadows anyway. That saves a light and extra geometry or groups, and web model viewers would probably render it better. If you're doing a reveal transition, use a clipping plane or a texture alpha mask.
It depends on the presentation versus the output format, and on the complexity of the final floorplan. If your process is simple, it will run Sims Lite on a Raspberry Voxel.
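A rough sketch of what I mean (names are placeholders; the clipping plane is only needed for the reveal):

// The room's lit look is baked into the emissive term, so no light is needed inside.
const roomMaterial = new THREE.MeshStandardMaterial({
  color: 0x000000,
  emissive: 0x665544
});

// Reveal transition: clip the roof away instead of removing it.
renderer.localClippingEnabled = true;
roofMaterial.clippingPlanes = [new THREE.Plane(new THREE.Vector3(0, 0, -1), 1)];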
I'm trying to render a fairly complex lamp using Three.js: https://sayduck.com/3d/xhcn
The product is split up in multiple meshes similar to this one:
The main issue is that I also need to use transparent PNG textures (in order to achieve the complex shape while keeping polygon counts low) like this:
As you can see from the live demo, this gives really weird results, especially when rotating the camera around the lamp - I believe due to z-ordering of the meshes.
I've been reading answers to similar questions on SO, like https://stackoverflow.com/a/15995475/5974754 or https://stackoverflow.com/a/37651610/5974754 to get an understanding of the underlying mechanism of how transparency is handled in Three.js and WebGL.
I think that in theory, what I need to do is, each frame, explicitly define a renderOrder for each mesh with a transparent texture (because the order based on distance to camera changes when moving around), so that Three.js knows which pixel is currently closest to the camera.
However, even ignoring for the moment that explicitly setting the order each frame seems far from trivial, I am not sure I understand how to set this order theoretically.
My meshes have fairly complex shapes and are quite intertwined, which means that from a given camera angle, some parts of mesh A can be closer to the camera than some parts of mesh B, while elsewhere, parts of mesh B are closer.
In this situation, it seems impossible to define a closer mesh, and thus a proper renderOrder.
Have I understood correctly, and this is basically reaching the limits of what WebGL can handle?
Otherwise, if this is doable, is the approach with two render scenes (one for opaque meshes first, then one for transparent ones ordered back to front) the right one? How should I go about defining the back to front renderOrder the way that Three.js expects?
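To make that concrete, this is roughly the per-frame update I have in mind (a sketch; transparentMeshes is a placeholder for my transparent-textured meshes):

const tmp = new THREE.Vector3();

function updateRenderOrder(transparentMeshes, camera) {
  transparentMeshes
    .map((mesh) => ({
      mesh,
      dist: camera.position.distanceTo(mesh.getWorldPosition(tmp))
    }))
    .sort((a, b) => b.dist - a.dist) // farthest first, i.e. back to front
    .forEach((entry, order) => { entry.mesh.renderOrder = order; });
}

// But this only orders whole meshes, which is exactly the problem described
// above when meshes interleave in depth.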
Thanks a lot for your help!
Is it possible to have a custom geometry emit light in Three.js?
There is a similar question from 5 years ago here.
In my particular case, I have created a TorusGeometry. I would like this torus to also give off light. Is that possible?
The only true way to do this is raytracing, in which case your torus becomes an "emitter" of photons and its geometry is used to calculate the initial directions of said photons.
Otherwise, light technically doesn't exist. Only (mathematical) descriptions of lights exist. (Remember, lights aren't visible/aren't rendered unless you're using a LightHelper.) These descriptions are used by material shaders, which use the light descriptions (and other objects in the scene, in the case of shadows) to determine the color the current fragment should contribute to a pixel.
With this in mind, if you could write a shader to handle a torus-shaped light, then all you need to do is provide that light's information to the shader. You can do this by extending a THREE.js light class to make your own TorusLight to add to the scene, then give the objects in your scene your custom shader.
THAT SAID, if you'd be satisfied with simulating the torus light, and want a visible torus, you can always add a PointLight at the position of your torus (or several throughout the body of the torus), and give your torus some kind of glow effect.
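A sketch of that simulated version (scene is a placeholder; the emissive term makes the torus itself look like it glows, while point lights do the actual lighting):

// Visible "glowing" torus: the emissive color is independent of scene lighting.
const torus = new THREE.Mesh(
  new THREE.TorusGeometry(2, 0.3, 16, 64),
  new THREE.MeshStandardMaterial({ color: 0x000000, emissive: 0xffcc66 })
);
scene.add(torus);

// One point light at the centre, or several spread around the ring for a
// closer approximation of a ring-shaped emitter.
for (let i = 0; i < 6; i++) {
  const angle = (i / 6) * Math.PI * 2;
  const light = new THREE.PointLight(0xffcc66, 0.5, 20);
  light.position.set(Math.cos(angle) * 2, Math.sin(angle) * 2, 0).add(torus.position);
  scene.add(light);
}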
I am curious about the limits of three.js. The following question is asked mainly as a challenge, not because I actually need the specific knowledge/code right away.
Say you have a game/simulation world model around a sphere geometry representing a planet, like the worlds of the game Populous. The resolution of polygons and textures is sufficient to look smooth when the globe fills the view of an ordinary camera. There are animated macroscopic objects on the surface.
The challenge is to project everything from the model to a global map projection on the screen in real time. The choice of projection is yours, but it must be seamless/continuous, and it must be possible for the user to rotate it, placing any point on the planet surface in the center of the screen. (It is not an option to maintain an alternative model of the world only for visualization.)
There are no limits on the number of cameras etc. allowed, but the performance must be expected to be "realtime", say two-figured FPS or more.
I don't expect any proof in the form of a running application (although that would be cool), but some explanation as to how it could be done.
My own initial idea is to place a lot of cameras, in fact one for every pixel in the map projection, around the globe, within a Group object that is attached to some kind of orbit controls (with rotation only), but I expect the number of object culling operations to become a huge performance issue. I am sure there must exist more elegant (and faster) solutions. :-)
Why not just use a spherical camera model (think of a 360° camera) and virtually put it in the center of the sphere? This camera would (if it were physically possible) be wrapped all around the sphere, looking toward the center from all directions.
This camera could be implemented in shaders (instead of the regular projection-matrix) and would produce an equirectangular image of the planet-surface (or in fact any other projection you want, like spherical mercator-projection).
As far as I can tell, the vertex shader can implement any projection you want, and it doesn't need to represent a camera that is physically possible. It just needs to produce consistent clip-space coordinates for all vertices. Fragment shaders for lighting would still need to operate on the original coordinates, normals etc., but that should be achievable. So the vertex shader would just need to compute (x,y,z) => (phi,theta,r) and go on with that.
Occlusion-culling would need to be disabled, but iirc three.js doesn't do that anyway.
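A minimal sketch of such a "spherical camera" as a ShaderMaterial (assuming the planet is centered at the origin; lighting is left out, and triangles crossing the +/-180° seam would smear across the screen and need extra handling):

const equirectMaterial = new THREE.ShaderMaterial({
  vertexShader: `
    varying vec2 vUv;
    void main() {
      vUv = uv;
      // Direction from the planet center to this vertex, in world space.
      vec3 dir = normalize((modelMatrix * vec4(position, 1.0)).xyz);
      float phi = atan(dir.z, dir.x);              // longitude, -PI..PI
      float theta = asin(clamp(dir.y, -1.0, 1.0)); // latitude,  -PI/2..PI/2
      // Map longitude/latitude straight to clip-space x/y (equirectangular).
      gl_Position = vec4(phi / 3.14159265, theta / 1.57079632, 0.0, 1.0);
    }
  `,
  fragmentShader: `
    varying vec2 vUv;
    void main() {
      gl_FragColor = vec4(vUv, 0.5, 1.0); // placeholder shading; lighting would go here
    }
  `
});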
Suppose I have a 3D model:
The model is given in the form of vertices, faces (all triangles) and normal vectors. The model may have holes and/or transparent parts.
For an arbitrarily placed light source at infinity, I have to determine:
[required] which triangles are (partially) shadowed by other triangles
Then, for the partially shadowed triangles:
[bonus] what fraction of the area of the triangle is shadowed
[superbonus] come up with a new mesh that describe the shape of the shadows exactly
My final application has to run on headless machines, that is, they have no GPU. Therefore, all the standard things from OpenGL, OpenCL, etc. might not be the best choice.
What is the most efficient algorithm to determine these things, considering this limitation?
Do you have a single mesh or multiple meshes?
In other words, is the shadow projected onto a single 'ground' surface, or onto several surfaces like room walls or even nearby objects? Depending on this, the solutions are very different.
For flat ground/wall surfaces
the best approach is usually a render projected onto that surface.
The camera direction is opposite to the light direction, and the "screen" is the surface being rendered onto. The surface is usually not perpendicular to the light, so you need a projection to compensate... You need one render pass for each target surface, so this is not suitable when the shadow falls on nearby meshes (it is only for ground/walls).
For more complicated scenes
you need a more advanced approach. There are quite a number of them, each with its own advantages and disadvantages. I would use a voxel map, but if you are limited by memory then some stencil/vector approach will be better. Of course, all of these techniques are quite expensive, and without a GPU I would not even try to implement them.
This is what a voxel map looks like:
If you want just self-shadowing, then the voxel map can cover only a bounding box around your mesh. In that case you do not incorporate the whole mesh volume, just the projection of each surface point along the light direction (ignore the first voxel...) to avoid shadowing the lit surface itself.
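For just the [required] part, a brute-force CPU sketch is shown below: one ray per triangle centroid toward the light, tested with Möller–Trumbore against every other triangle. It is O(n²), so for large meshes you would want a BVH or the voxel map above, it ignores transparency/holes, and a single centroid sample only approximates "partially shadowed".

const sub = (p, q) => ({ x: p.x - q.x, y: p.y - q.y, z: p.z - q.z });
const cross = (p, q) => ({
  x: p.y * q.z - p.z * q.y,
  y: p.z * q.x - p.x * q.z,
  z: p.x * q.y - p.y * q.x
});
const dot = (p, q) => p.x * q.x + p.y * q.y + p.z * q.z;
const EPS = 1e-9;

// Möller–Trumbore: does the ray (orig + t*dir, t > EPS) hit triangle (a, b, c)?
function rayHitsTriangle(orig, dir, a, b, c) {
  const e1 = sub(b, a), e2 = sub(c, a);
  const p = cross(dir, e2);
  const det = dot(e1, p);
  if (Math.abs(det) < EPS) return false; // ray parallel to the triangle
  const inv = 1 / det;
  const s = sub(orig, a);
  const u = dot(s, p) * inv;
  if (u < 0 || u > 1) return false;
  const q = cross(s, e1);
  const v = dot(dir, q) * inv;
  if (v < 0 || u + v > 1) return false;
  return dot(e2, q) * inv > EPS;         // hit in front of the ray origin
}

// triangles: array of [a, b, c] vertices ({x, y, z}); toLight: unit vector
// pointing toward the light at infinity. Returns one boolean per triangle:
// true if its centroid is occluded by any other triangle.
function shadowedTriangles(triangles, toLight) {
  return triangles.map(([a, b, c], i) => {
    const centroid = {
      x: (a.x + b.x + c.x) / 3,
      y: (a.y + b.y + c.y) / 3,
      z: (a.z + b.z + c.z) / 3
    };
    return triangles.some((t, j) =>
      j !== i && rayHitsTriangle(centroid, toLight, t[0], t[1], t[2])
    );
  });
}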