In my three.js project I have two objects. One is a golden ring and the other one is a diamond. Now I would like to "cut out" a piece of the ring and place a frame for the diamond in the hole.
I created an alpha map for testing and applied it to the ring material. I also positioned the diamond to be above the point where the ring is now transparent.
Everything seems to work fine, except that I still can't see the diamond "inside" the ring. After looking at this and this post I have set renderOrder to 1 on the diamond but this does not help.
Well in the end it turned out to be a very stupid mistake. I forgot to set transparent: true on the material.
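For anyone else hitting this, a minimal sketch of the fix (the material type and texture name are placeholders for whatever the project actually uses):

    // An alpha map only takes effect once the material is flagged as transparent.
    const ringMaterial = new THREE.MeshStandardMaterial({
      color: 0xffd700,        // gold-ish
      alphaMap: ringAlphaMap, // the test alpha map described above (assumed loaded)
      transparent: true       // the missing line: without it the cut-out never appears
    });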
I'm trying to make a very simple shadow shader, which consists of a plane with a shader showing a radial gradient in color and alpha.
Beneath this shadow lies another plane with the same kind of shader but linear.
And as a background of all this, a linear gradient from dark blue to light blue.
The problem is that when my camera approaches the ground, the plane of the shadow masks the floor.
Why does it happen and what can I do to prevent that?
https://codesandbox.io/s/epic-sun-po9j3
https://po9j3.csb.app/
You'd need to post code to check for sure, but it likely happens because three.js sorts the order in which it draws things based on the distance from the center of each object to the camera.
You can force a different order by setting Object3D.renderOrder.
three.js also generally draws opaque things before transparent things, so my guess is your ground plane and your shadow plane are both set to transparent: true. The ground can be set to transparent: false, in which case it will be drawn first.
You might find this article useful. It shows a similar example.
As for why there is a hole: it's because of the depth buffer. If something in front gets drawn first, then the pixels behind it are not drawn. So if the shadow happens to be drawn first, it ends up looking like a hole, because the pixels of the plane behind it are not drawn.
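A hedged sketch of that setup (object and material names are illustrative; note that turning off depth writes on the shadow plane is a companion fix not spelled out above, but it directly addresses the hole effect):

    // Ground: opaque, drawn first, writes depth as usual.
    groundMaterial.transparent = false;

    // Shadow plane: transparent, drawn after the opaque objects.
    shadowMaterial.transparent = true;
    shadowMaterial.depthWrite = false; // don't punch holes in things drawn later

    // If the distance-based sort still picks the wrong order, force it explicitly.
    shadowPlane.renderOrder = 1; // larger values are drawn later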
See this
I'm currently rendering a skybox to a THREE.CubeCamera target and am then using that target as the environment map on a material. The idea being that I want to have the colour of a cube affected by the colour of the sky around it, though not fully reflecting it (like how a white matte cube would look in the real world).
For example, here is what I have so far applying the environment map to a THREE.MeshLambertMaterial or THREE.MeshPhongMaterial with reflectivity set to 0.7 (same results):
Notice in the first image that the horizon line is clearly visible (this is at sunset when it's most obvious) and that the material is very reflective. The second image shows the same visible horizon line, which moves with the camera as you orbit. The third image shows the box at midday with blue sky above it (notice how the blue is reflected very strongly).
The effect I'm trying to aim for is a duller, perhaps blurred representation of what we can already see working here. I want the sky to affect the cube but I don't want to fully reflect it, instead I want each side of the cube to have a much more subtle effect without a visible horizon line.
I've experimented with the reflectivity property of the materials without much luck. Yes, it reduces the reflection effect, but it also removes most of the colouring taken from the skybox. I've also tried the shininess property of THREE.MeshPhongMaterial, but that didn't seem to do much, if anything.
I understand that environment maps are meant to be reflections, however my hope is that there is a way to achieve what I'm after. I want a reflection of the sky, I just need it to be much less precise and instead more blurred / matte.
What could I do to achieve this?
I achieved this by writing my own custom shader based on a physically based rendering shading model.
I use the Cook-Torrance model, which takes the roughness of the material into account for the specular contribution. It's not a topic I can cover fully in this answer; you can find great references at http://graphicrants.blogspot.it/ in the specular BRDF article.
In this question you can find how I achieve blurry reflections depending on material roughness.
Hope it helps.
I solved this by passing a different, pre-blurred set of textures as the cubemap for the object.
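A minimal sketch of that approach, assuming six pre-blurred face images exist at the path shown (the path and file names are placeholders):

    // Load a pre-blurred copy of the skybox purely for use as an environment map.
    const blurredEnv = new THREE.CubeTextureLoader()
      .setPath('textures/sky-blurred/')
      .load(['px.jpg', 'nx.jpg', 'py.jpg', 'ny.jpg', 'pz.jpg', 'nz.jpg']);

    const matteMaterial = new THREE.MeshLambertMaterial({
      color: 0xffffff,
      envMap: blurredEnv,
      reflectivity: 0.7 // same value as in the question; tune to taste
    });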
I'm writing a little 3D engine. I've just added alpha blending functionality to my program, and I wonder one thing: do I have to sort all the primitives relative to the camera?
Let's take a simple example: a scene composed of 1 skybox and 1 tree with alpha-blended leaves!
Here's a screenshot of such a scene:
Up to here, everything seems correct concerning the alpha blending of the leaves relative to each other.
But if we get closer...
... we can see there is a little trouble on the top right of the image (the area around the leaf forms a quad).
I think this bug comes from the fact that these two quads (primitives) should have been rendered after the ones behind them.
What do you think about my supposition?
PS: I want to point out that all the leaf geometry is rendered in just one draw call.
But if I'm right, it would mean that when I need to render an alpha-blended mesh like this tree, I need to update my VBO every time the camera moves, sorting all the primitives (triangles or quads) from the camera's point of view so that the primitives in back are rendered first...
What do you think of my idea?
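For what it's worth, a rough sketch of the per-triangle sort being proposed (plain JavaScript; the array names and data layout are assumptions):

    // Sort an indexed triangle list far-to-near, then re-upload the index buffer.
    // 'positions' is a flat xyz Float32Array, 'indices' holds triangle vertex
    // indices, and 'cameraPos' is [x, y, z] in the same space as the positions.
    function sortTrianglesBackToFront(positions, indices, cameraPos) {
      const tris = [];
      for (let i = 0; i < indices.length; i += 3) {
        const [a, b, c] = [indices[i], indices[i + 1], indices[i + 2]];
        let d = 0; // squared distance from the camera to the triangle centroid
        for (let k = 0; k < 3; k++) {
          const cen = (positions[a * 3 + k] + positions[b * 3 + k] + positions[c * 3 + k]) / 3;
          d += (cen - cameraPos[k]) * (cen - cameraPos[k]);
        }
        tris.push({ a, b, c, d });
      }
      tris.sort((t, u) => u.d - t.d); // farthest first
      tris.forEach((t, i) => indices.set([t.a, t.b, t.c], i * 3));
      return indices; // re-upload to a dynamic index buffer after each sort
    }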
I'm having an issue with back faces (relative to the light) and shadow mapping that I can't seem to get past. I'm still at the relatively early stages of optimizing my engine, but even with everything hand-tuned for this one piece of geometry it still looks like garbage.
The geometry is a skinny wall that is "curved" via about 5 different chunks of wall. When I create my depth map I'm culling front faces (relative to the light). This definitely helps, but the front faces on the other side of the wall are what seem to be causing the z-fighting/projective shadowing.
Some notes on the screenshot:
Front faces are culled when the depth texture (from the light) is being drawn (see the sketch after these notes)
I have the near and far planes tuned just for this chunk of geometry (set at 20 and 25 respectively)
One directional light source, coming down on a slight angle toward the right side of the scene, enough to indicate that wall should be shadowed, but mostly straight down
Using a ludicrously large 4096x4096 shadow map texture
All lighting is disabled in this screenshot, but note that I am doing soft lighting (and hence using vertex normals) even on this wall
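For concreteness, the culling setup described in these notes corresponds to something like this (a WebGL-flavoured sketch; the two render helpers are assumptions, not real API calls):

    // Depth pass: cull front faces (w.r.t. the light) so only back faces
    // land in the shadow map, which reduces acne on the lit faces.
    gl.enable(gl.CULL_FACE);
    gl.cullFace(gl.FRONT);
    renderDepthFromLight();   // assumed helper: draws the scene into the depth FBO

    // Main pass: restore the usual back-face culling.
    gl.cullFace(gl.BACK);
    renderSceneFromCamera();  // assumed helper: draws the lit scene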
As mentioned here, the conclusion is that you should not shadow polygons that are back-facing relative to the light. I'm struggling with this particular issue because I don't want to pass the face normals all the way through to the fragment shader just to rule out the true back faces there; however, if anyone feels this is the best/only solution for this geometry, that's what I'll have to do. Considering that the pipeline doesn't make it easy or obvious to pass face normals through, this doesn't feel like the path of least resistance. Note that the normals I am passing are the vertex normals, to allow for softer lighting effects around the edges (which will likely include both non-shadowed and shadowed surfaces).
Note that I am getting some nasty perspective aliasing. My next step is to work on cascaded shadow maps, but without fixing this first I feel like I'm just delaying the inevitable, as I've hand-tightened the view as best I can (or so I think).
Anyway, I feel like I'm missing something, so any thoughts or help at all would be most appreciated!
EDIT
To be clear, the wall technically should NOT be in shadow, based on where the light is coming from.
Below is an image with shadowing turned off. This is just using the vertex normals to calculate diffuse lighting; it's not pretty (too much geometry is visible), but it does show that some of the edges are somewhat visible.
So yes, the wall SHOULD be in shadow, but I'm hoping I can get the smoothing working better so the edges can have some diffuse lighting. If it needs to be completely in shadow, then whether it's the shadow map that puts it in shadow, or my code specifically putting it in shadow because the face normal points away, I'm fine with that; but passing the face normal through to my vertex/fragment shader does not seem like the path of least resistance.
Perhaps these will help illustrate my problem better, or perhaps bring to light some fundamental understanding I am missing.
EDIT #2
I've included the depth texture below. You can see the wall in question in the bottom left, and from the screenshot you can see how I've trimmed the depth values to roughly 0.4-1. This means the depth values of that wall start in the 0.4 range, so it's not PERFECTLY clipped for it, but it's close. Does that seem reasonable? I'm pretty sure it's a full 24- or 32-bit depth buffer, a la the DEPTH_COMPONENT extension on iOS. For #starmole, does this help to determine if it's a scaling error in my projection? Do you think the size/area covered by my map is too large, and that focusing it closer might help?
The problem seems to be that you are
Culling the front faces
Looking at the back face
Not removing the light contribution from the back face, even though its normal says it's not actually lit - or there is some inaccuracy in that computation
Probably not adding some epsilon
(1) and (2) mean that there will be Z-fighting between the shadow map and the back faces.
Also, the shadow map resolution is not going to help you: just look at the wall in the shadow map, it's one pixel thick.
Recommendations:
Epsilons. Make sure that Z > lightZ + epsilon
Epsilons. Check that the wall is facing the light (dot of the normal with the light direction > epsilon), so that a wall that is very nearly orthogonal to the light still counts as shadowed
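In shader terms, the two checks above might look like this (a GLSL excerpt kept as a JavaScript string, e.g. for a ShaderMaterial; all variable names and epsilon values are illustrative and need tuning):

    const shadowTestChunk = /* glsl */ `
      // (1) Depth epsilon: only shadow fragments meaningfully farther
      //     from the light than the stored shadow-map depth.
      float storedZ = texture2D(shadowMap, shadowCoord.xy).r;
      bool behind = shadowCoord.z > storedZ + 0.005; // illustrative bias

      // (2) Facing epsilon: faces nearly orthogonal to the light get
      //     shadowed outright instead of flickering between lit and unlit.
      bool facesLight = dot(normalize(vNormal), lightDir) > 0.05;

      float shadow = (behind || !facesLight) ? 0.0 : 1.0;
    `;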
Every technique that I've found or tried for rendering an outline in OpenGL uses some function that is not available on OpenGL ES...
Actually, what I could do is set depthMask to false, draw the object as a 3-pixel-wide line wireframe, re-enable the depthMask, and then draw my object. That doesn't work for me because it outlines only the external parts of my object, not the internal ones.
The following image shows two outlines, the left one is a correct outline, the right one is what I got.
So, can someone direct me to a technique that is available on OpenGL ES?
Haven't done one of these for a while, but I think you're almost there! What I would recommend is this:
Keep depthMask enabled, but flip your backface culling to only render the "inside" of the object.
Draw the mesh with a shader that pushes all the verts out slightly along their normals, rendered as a solid color (your outline color, probably black). Make sure that you're drawing solid triangles and not just GL_LINES.
Flip the backface culling back to normal again and re-render the mesh like usual.
The result is that the outlines will only be visible around the points on your mesh where the triangles start to turn away from the camera. This gives you some nice, simple outlines around things like noses, chins, lips, and other internal details.
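In three.js terms (since that's what the rest of this page uses), the three steps might look roughly like this; the outline width and shader are illustrative:

    // Inverted-hull outline: draw the back faces of the mesh, pushed out along
    // the vertex normals, in a solid colour; the real mesh then draws on top.
    // Depth writes stay enabled (the ShaderMaterial default), as step 1 suggests.
    const outlineMaterial = new THREE.ShaderMaterial({
      side: THREE.BackSide, // step 1: flip culling, render only the "inside"
      uniforms: { outlineWidth: { value: 0.02 } }, // illustrative thickness
      vertexShader: /* glsl */ `
        uniform float outlineWidth;
        void main() {
          // step 2: push each vertex out slightly along its normal
          vec3 p = position + normal * outlineWidth;
          gl_Position = projectionMatrix * modelViewMatrix * vec4(p, 1.0);
        }
      `,
      fragmentShader: /* glsl */ `
        void main() { gl_FragColor = vec4(0.0, 0.0, 0.0, 1.0); } // solid black
      `
    });

    const outlineMesh = new THREE.Mesh(mesh.geometry, outlineMaterial);
    outlineMesh.renderOrder = -1; // draw the hull before the mesh itself
    mesh.add(outlineMesh);        // step 3: the original mesh renders as usual on top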