Hello, I'm trying to achieve the effect in the image below (something like a shine of light, but only on top of the raw image).
Unfortunately I can't figure out how to do it. I've tried some shaders and assets from the Asset Store, but so far none of them has worked, and I don't know much about shaders.
The raw image is a UI element, and it displays a render texture that is being captured by a camera.
I'm totally lost here, so any kind of help will be appreciated. How can I make that effect?
Fresnel shaders use the difference between the surface normal and the view vector to detect which pixels are facing the viewer and which aren't. A UI plane will always face the user, so no luck there.
Solving this with shaders can be done in two ways: either you bake a normal map of the imagined "curvature" of the outer edge (example), or you create a signed distance field (example) or some similar map of the distance to the edge. A normal map would probably allow for the most complex effects, and I am sure that some Fresnel shaders could work with it too. It does, however, require you to make a model of the shape and bake the normals from that.
A signed distance field, on the other hand, can be generated with a script from an image, so if you have a lot of images it might be the fastest approach. Getting the edge distance in real time inside the shader would not really work, since you'd have to sample a very large number of neighboring pixels, which might make the shader 10-20 times slower depending on how thick you need the edge to be.
If you don't need the image to be that dynamic, then maybe just creating an inner-glow black/white texture in Photoshop and overlaying it using an additive shader would work better for you. If you don't know how to write shaders, the two approaches above may be a bit of a tall order.
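For the distance-field route, the image-to-distance step can be as crude as the brute-force sketch below. It is written in TypeScript only because the idea is engine-agnostic: it just looks at raw RGBA pixel data, and it is far too slow for anything but offline generation.

```typescript
// Offline sketch: for every opaque pixel, compute the distance to the nearest
// transparent pixel and store it in a grayscale "edge distance" map.
// Brute force (scans the whole image per pixel), so offline use only.
function edgeDistanceField(rgba: Uint8ClampedArray, width: number, height: number): Float32Array {
  const isOpaque = (x: number, y: number) => rgba[(y * width + x) * 4 + 3] > 127;
  const dist = new Float32Array(width * height);

  for (let y = 0; y < height; y++) {
    for (let x = 0; x < width; x++) {
      if (!isOpaque(x, y)) { dist[y * width + x] = 0; continue; }
      let best = Infinity;
      // Distance to the closest transparent pixel, i.e. to the outer edge.
      for (let ty = 0; ty < height; ty++) {
        for (let tx = 0; tx < width; tx++) {
          if (!isOpaque(tx, ty)) {
            const d = Math.hypot(tx - x, ty - y);
            if (d < best) best = d;
          }
        }
      }
      dist[y * width + x] = best === Infinity ? 0 : best;
    }
  }
  return dist;
}
```

A shader can then sample this map and brighten pixels whose edge distance falls below some threshold to fake an inner glow or rim light.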
I'm trying to render a fairly complex lamp using Three.js: https://sayduck.com/3d/xhcn
The product is split up into multiple meshes, similar to this one:
The main issue is that I also need to use transparent PNG textures (in order to achieve the complex shape while keeping polygon counts low) like this:
As you can see from the live demo, this gives really weird results, especially when rotating the camera around the lamp - I believe due to z-ordering of the meshes.
I've been reading answers to similar questions on SO, like https://stackoverflow.com/a/15995475/5974754 or https://stackoverflow.com/a/37651610/5974754 to get an understanding of the underlying mechanism of how transparency is handled in Three.js and WebGL.
I think that in theory, what I need to do is, each frame, explicitly define a renderOrder for each mesh with a transparent texture (because the order based on distance to camera changes when moving around), so that Three.js knows which pixel is currently closest to the camera.
However, even ignoring for the moment that explicitly setting the order each frame seems far from trivial, I am not sure I understand how to set this order theoretically.
My meshes have fairly complex shapes and are quite intertwined, which means that from a given camera angle, some parts of mesh A can be closer to the camera than some parts of mesh B, while elsewhere parts of mesh B are closer.
In this situation, it seems impossible to define a closer mesh, and thus a proper renderOrder.
Have I understood correctly that this is basically reaching the limits of what WebGL can handle?
Otherwise, if this is doable, is the approach with two render scenes (one for opaque meshes first, then one for transparent ones ordered back to front) the right one? How should I go about defining the back to front renderOrder the way that Three.js expects?
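For what it's worth, the naive version of what I have in mind looks something like the sketch below (all names are my own, and this per-mesh ordering is exactly what I suspect breaks down for intertwined meshes):

```typescript
import * as THREE from 'three';

// Rough sketch of the per-frame idea: sort only the transparent meshes
// back-to-front by distance from the camera and feed that into renderOrder.
// Distance is taken from each mesh's world position, which is the part that
// stops making sense once meshes interpenetrate.
function updateTransparentRenderOrder(camera: THREE.Camera, transparentMeshes: THREE.Mesh[]): void {
  const camPos = new THREE.Vector3();
  camera.getWorldPosition(camPos);

  const sorted = transparentMeshes
    .map((mesh) => {
      const pos = new THREE.Vector3();
      mesh.getWorldPosition(pos);
      return { mesh, dist: pos.distanceTo(camPos) };
    })
    .sort((a, b) => b.dist - a.dist); // farthest first => drawn first

  sorted.forEach((entry, i) => { entry.mesh.renderOrder = i; });
}

// Called every frame before rendering:
// updateTransparentRenderOrder(camera, lampTransparentMeshes);
```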
Thanks a lot for your help!
I've been trying to render silhouettes on CAD models with WebGL. The closest I got to the desired result was with fwidth and a dot product between the normal and the eye vector. I found it difficult to control the width, though.
I saw another web based viewer and it's capable of doing something like this:
I started digging through the shaders, and the most I could figure out is that this is analytical: an actual line entity is drawn, and the width is achieved by rendering a quad instead of default WebGL lines. There is a bunch of logic in the shader, and my best guess is that the vertex positions are simply updated on every render.
This is a procedural model, so I guess that for cones and cylinders, two lines can always be allocated, silhouette points computed, and the lines updated.
If that is the case, would it be a good idea to try and do something like this in the shader (maybe it's already happening and I didn't understand it)? I can see a cylinder being written to attributes or uniforms and the points computed there.
Is there an approach like this already documented somewhere?
edit 8/15/17
I have not found any papers or documented techniques about this. But it got a couple of votes.
Given that I do have information about cylinders and cones, my idea is to sample the normal of that parametric surface at the vertex, push the surface out by a factor that would cover some number of pixels in screen space, stencil it, and draw a thick line, thus clipping it with the actual shape of the surface.
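The "push the surface out along its normal" step seems close to the classic inverted-hull outline trick; in three.js/GLSL terms I imagine something like this sketch (just the inflation pass, without the stencil/clipping logic, and all values are arbitrary):

```typescript
import * as THREE from 'three';

// Sketch of the "push the surface out along its normal" pass. One common
// variant ("inverted hull") renders only the back faces of the inflated shell
// so it appears as a rim around the original mesh.
const hullMaterial = new THREE.ShaderMaterial({
  side: THREE.BackSide, // draw only the inflated shell's back faces
  uniforms: {
    outlineWidth: { value: 0.02 }, // world-space inflation; would need depth
                                   // scaling to stay constant in pixels
    outlineColor: { value: new THREE.Color(0x000000) },
  },
  vertexShader: /* glsl */ `
    uniform float outlineWidth;
    void main() {
      // Push each vertex outward along its normal before projecting.
      vec3 inflated = position + normal * outlineWidth;
      gl_Position = projectionMatrix * modelViewMatrix * vec4(inflated, 1.0);
    }
  `,
  fragmentShader: /* glsl */ `
    uniform vec3 outlineColor;
    void main() {
      gl_FragColor = vec4(outlineColor, 1.0);
    }
  `,
});

// Usage: a second mesh sharing the CAD geometry, drawn with hullMaterial.
// const outlineMesh = new THREE.Mesh(cadGeometry, hullMaterial);
// scene.add(outlineMesh);
```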
The traditional shader-based method is Gooch shading. The original paper is here:
http://artis.imag.fr/~Cyril.Soler/DEA/NonPhotoRealisticRendering/Papers/p447-gooch.pdf
The old-fashioned OpenGL technique is from Jeff Lander.
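The core of Gooch shading is just a cool-to-warm ramp driven by N·L; a rough three.js sketch of that part is below (colors and light direction are arbitrary, the light is given in view space for simplicity, and the paper's silhouette edges are not included):

```typescript
import * as THREE from 'three';

// Minimal Gooch-style cool-to-warm shading sketch: blend between a cool and a
// warm color based on how much the surface faces the light.
const goochMaterial = new THREE.ShaderMaterial({
  uniforms: {
    lightDirection: { value: new THREE.Vector3(1, 1, 1).normalize() }, // view space
    coolColor: { value: new THREE.Color(0x0044aa) },
    warmColor: { value: new THREE.Color(0xccaa33) },
  },
  vertexShader: /* glsl */ `
    varying vec3 vNormal;
    void main() {
      vNormal = normalize(normalMatrix * normal); // view-space normal
      gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
    }
  `,
  fragmentShader: /* glsl */ `
    uniform vec3 lightDirection;
    uniform vec3 coolColor;
    uniform vec3 warmColor;
    varying vec3 vNormal;
    void main() {
      // Map N.L from [-1, 1] to [0, 1] and blend cool -> warm.
      float t = dot(normalize(vNormal), normalize(lightDirection)) * 0.5 + 0.5;
      gl_FragColor = vec4(mix(coolColor, warmColor, t), 1.0);
    }
  `,
});
```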
I am curious about the limits of three.js. The following question is asked mainly as a challenge, not because I actually need the specific knowledge/code right away.
Say you have a game/simulation world model around a sphere geometry representing a planet, like the worlds of the game Populous. The resolution of polygons and textures is sufficient to look smooth when the globe fills the view of an ordinary camera. There are animated macroscopic objects on the surface.
The challenge is to project everything from the model to a global map projection on the screen in real time. The choice of projection is yours, but it must be seamless/continuous, and it must be possible for the user to rotate it, placing any point on the planet surface in the center of the screen. (It is not an option to maintain an alternative model of the world only for visualization.)
There are no limits on the number of cameras etc. allowed, but the performance must be "real-time", say double-digit FPS or more.
I don't expect any proof in the form of a running application (although that would be cool), but some explanation of how it could be done.
My own initial idea is to place a lot of cameras, in fact one for every pixel in the map projection, around the globe, within a Group object that is attached to some kind of orbit controls (with rotation only), but I expect the number of object culling operations to become a huge performance issue. I am sure there must exist more elegant (and faster) solutions. :-)
Why not just use a spherical camera model (think of a 360° camera) and virtually put it in the center of the sphere? This camera would (if it were physically possible) be wrapped all around the sphere, looking toward the center from all directions.
This camera could be implemented in shaders (instead of the regular projection matrix) and would produce an equirectangular image of the planet surface (or in fact any other projection you want, like a spherical Mercator projection).
As far as I can tell, the vertex shader can implement any projection you want, and it doesn't need to represent a camera that is physically possible. It just needs to produce consistent clip-space coordinates for all vertices. Fragment shaders for lighting would still need to operate on the original coordinates, normals etc., but that should be achievable. So the vertex shader would just need to compute (x, y, z) => (phi, theta, r) and go on from there.
Occlusion culling would need to be disabled, but IIRC three.js doesn't do that anyway.
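A rough sketch of what such a vertex shader could look like in a three.js ShaderMaterial (names and values are my own, there is no lighting, and triangles crossing the ±180° seam would need extra handling such as splitting them):

```typescript
import * as THREE from 'three';

// Sketch of an "equirectangular camera": instead of a perspective projection,
// the vertex shader converts world positions to (longitude, latitude, radius)
// and uses longitude/latitude directly as clip-space x/y.
const equirectangularMaterial = new THREE.ShaderMaterial({
  uniforms: {
    // Rotating this matrix re-centers the map on any point of the surface.
    viewRotation: { value: new THREE.Matrix3() },
    // Anything farther from the center than this is clipped; used for depth.
    maxRadius: { value: 10.0 },
  },
  vertexShader: /* glsl */ `
    uniform mat3 viewRotation;
    uniform float maxRadius;
    const float PI = 3.141592653589793;

    void main() {
      vec3 p = viewRotation * (modelMatrix * vec4(position, 1.0)).xyz;
      float r = max(length(p), 1e-5);
      float lon = atan(p.x, p.z);                   // [-PI, PI]
      float lat = asin(clamp(p.y / r, -1.0, 1.0));  // [-PI/2, PI/2]
      // Larger radius (higher altitude) => smaller z => drawn on top.
      gl_Position = vec4(lon / PI, lat / (0.5 * PI), -r / maxRadius, 1.0);
    }
  `,
  fragmentShader: /* glsl */ `
    void main() {
      gl_FragColor = vec4(1.0); // flat white; real lighting would use
                                // the original normals passed as varyings
    }
  `,
});
```

Each mesh using a material like this would also need frustumCulled set to false, since three.js's regular frustum test no longer matches what the shader does.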
I'm trying and failing to work out how to achieve a quad-tree of materials (images) on a single plane, much like a Google Maps-style zoomable tile that gets more accurate the closer you get.
In short, I want to be able to have a 1x1 image texture (covering a plane that is 256 units wide and tall) that can then be replaced with a 2x2 texture, that can then be replaced with a 4x4 texture, and so on.
Like the image example below…
Ideally, I want to avoid having to create a different plane for each zoom level / number of segments. A perfect solution would allow me to break a single plane into 8x8 segments (highest zoom) and update the number of textures on the fly. So it would start with a 1x1 texture across all 64 (8x8) segments, then change into a 2x2 texture with each texture covering 4x4 segments, and so on.
Unfortunately, I can't work out how to do this. I explored setting the materialIndex for each face but you aren't able to update those after the first render so that wouldn't work. I've tried looking into UV coordinates but I don't understand how it would work in this situation, nor how to actually implement that in Three.js – there is little in the way of documentation / examples for this specific case.
A vertex shader is another option that came up in research, but again I don't know enough to understand how to construct that.
I'd appreciate any and all help with this, it will be a technique that proves valuable for other Three.js users I'm sure.
I'm not 100% sure what you are trying to do, or whether you are talking about texture atlasing (looking up different textures based on the current setting/zoom), but if you are looking for quad-tree-based texturing that increases in detail as you zoom in, then this is essentially what mipmapping is and does.
(It can also be used to do all sorts of weird things because of that, but that's another adventure entirely.)
Generally mipmapping is automatic based on the filtering you use - however it sounds like you need more control over it.
I created an example hidden away in the three.js source tree which may help:
http://mrdoob.github.com/three.js/examples/webgl_materials_texture_manualmipmap.html
It shows you how to load each mipmap level in manually, rather than having them be generated automatically.
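In more recent three.js versions, the same idea looks roughly like this (property and constant names have shifted a little between versions, so treat this as a sketch):

```typescript
import * as THREE from 'three';

// Provide each mipmap level yourself instead of letting WebGL derive them,
// so coarser zoom levels can contain entirely different imagery.
function makeLevel(size: number, color: string): HTMLCanvasElement {
  const canvas = document.createElement('canvas');
  canvas.width = canvas.height = size;
  const ctx = canvas.getContext('2d')!;
  ctx.fillStyle = color;
  ctx.fillRect(0, 0, size, size);
  return canvas;
}

const base = makeLevel(128, '#f00');
const texture = new THREE.CanvasTexture(base);
// The chain must go all the way down to 1x1; level 0 is the base image.
texture.mipmaps = [
  base, makeLevel(64, '#0f0'), makeLevel(32, '#00f'), makeLevel(16, '#404'),
  makeLevel(8, '#044'), makeLevel(4, '#440'), makeLevel(2, '#fff'), makeLevel(1, '#000'),
];
texture.generateMipmaps = false;                    // keep the hand-made chain
texture.minFilter = THREE.NearestMipmapNearestFilter;

const material = new THREE.MeshBasicMaterial({ map: texture });
```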
HTH
I have several images of a rotating object. Each image shows a different angle. Now I want to let the user rotate the object with his or her fingers. This works but there aren't enough frames to show a smooth rotation. It's too jumpy.
I want to make it smoother, and for that I probably need to generate more "steps", i.e. images of the missing angles in between. Is there an existing algorithm or technique I could use?
I think that if you try to interpolate pixel values temporally between two consecutive images, it would give poor results, but it might be worth a try.
A more interesting approach would be to make a 3D estimation of your object using stereoscopic techniques and then project a synthetic view of the estimated scene at an intermediate position. For this to work, you will need to know the precise angle of the object in each frame. Occlusion is also an issue with stereoscopy.
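For the first idea, even a simple cross-fade between the two nearest captured angles can hide some of the jumpiness. A minimal sketch using a 2D canvas (names are illustrative, and this is plain blending, not true view interpolation, so ghosting is expected on large angular steps):

```typescript
// Cross-fade between the two captured frames nearest to the requested angle.
function drawInterpolatedFrame(
  ctx: CanvasRenderingContext2D,
  frames: HTMLImageElement[],   // captured frames, evenly spaced over 360°
  angleDegrees: number
): void {
  const step = 360 / frames.length;
  const exact = (((angleDegrees % 360) + 360) % 360) / step;
  const lower = Math.floor(exact) % frames.length;
  const upper = (lower + 1) % frames.length;
  const t = exact - Math.floor(exact);   // 0..1 between the two frames

  ctx.clearRect(0, 0, ctx.canvas.width, ctx.canvas.height);
  ctx.globalAlpha = 1;
  ctx.drawImage(frames[lower], 0, 0);
  ctx.globalAlpha = t;                   // fade the next frame on top
  ctx.drawImage(frames[upper], 0, 0);
  ctx.globalAlpha = 1;
}
```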