I'm working on a vector field over Perlin noise, and it was suggested that I speed it up using shaders. My graphics knowledge is still very basic, so I would like to ask whether my thinking about how to do it is correct.
Here is what I have (it is not the latest version with the third dimension, but you will get the concept, I guess).
So I will pass time and the noise value as attributes to the vertex shader. Unfortunately, I'm using a noise function from a library that requires positions, which would have to be calculated every frame in the shader. Is it possible to output from the shader a variable with the position calculated inside it for every particle?
I've also found https://github.com/ashima/webgl-noise/wiki for generating the noise inside the shader, but how do I update each particle's x, y, z position after moving it by the field value, and keep it for the next frame? GLSL is also supposed to have built-in noise functions, but I don't think you can use them with three.js?
Thank you in advance for any advice!
Have a look at this example: http://threejs.org/examples/#webgl_terrain_dynamic
It will give you some idea of how to create noise in a shader and drive positions dynamically from it.
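To sketch how that might look for your particles: the displacement itself can happen entirely in the vertex shader. This is only a minimal sketch, assuming a three.js ShaderMaterial (which supplies position, modelViewMatrix and projectionMatrix) and assuming the snoise() function from the ashima/webgl-noise repository has been pasted into the shader; uTime is a uniform you would update every frame.

```glsl
// Minimal sketch: displace each particle by a noise-driven field, recomputed per frame.
// Assumes snoise(vec3) from ashima/webgl-noise is defined above this code.
uniform float uTime;

void main() {
  // Sample the field at the particle's rest position.
  float n = snoise(vec3(position.xy * 0.1, uTime));

  // Turn the noise value into a direction and push the particle along it.
  vec3 displaced = position + vec3(cos(n * 6.2831), sin(n * 6.2831), 0.0);

  gl_Position = projectionMatrix * modelViewMatrix * vec4(displaced, 1.0);
  gl_PointSize = 2.0;
}
```

Note that this recomputes the offset from the rest position every frame rather than accumulating it: a WebGL 1 vertex shader cannot write its result back to the attribute buffer, so to keep updated positions for the next frame you would render them into a texture (ping-pong render targets) and read them back in the vertex shader, which is essentially the same render-to-texture idea the terrain example uses for its heightmap.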
Hello, I'm trying to achieve the effect in the image below (something like a shining light, but only on top of the Raw Image).
Unfortunately, I can't figure out how to do it. I tried some shaders and assets from the Asset Store, but so far none of them has worked; I also don't know much about shaders.
The Raw Image is a UI element, and it displays a render texture that is captured by a camera.
I'm totally lost here; any kind of help would be appreciated. How can I make that effect?
Fresnel shaders use the difference between the surface normal and the view vector to detect which pixels are facing the viewer and which aren't. A UI plane will always face the user, so no luck there.
Solving this with shaders can be done in two ways: either you bake a normal map of the imagined "curvature" of the outer edge (example), or you create a signed distance field (example) or some similar method that maps the distance to the edge. A normal map would probably allow for the most complex effects, and I am sure that some Fresnel shaders could work with that too. It does, however, require you to make a model of the shape and bake the normals from it.
A signed distance field, on the other hand, can be generated with a script from an image, so if you have a lot of images, it might be the fastest approach. Getting the edge distance in real time inside the shader would not really work, since you'd have to sample a very large number of neighboring pixels, which might make the shader 10-20 times slower depending on how thick you need the edge to be.
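Once the distance field is baked, the fragment shader side of it is small. A minimal sketch in GLSL terms, assuming a pre-baked distance texture (the names uMainTex, uEdgeDist and uGlowColor are placeholders, and uEdgeDist is assumed to store the normalized distance to the nearest edge in its red channel):

```glsl
// Sketch of an SDF-based inner glow: brighten pixels near the edge, fade inward.
uniform sampler2D uMainTex;   // the raw image being displayed
uniform sampler2D uEdgeDist;  // pre-baked distance-to-edge texture
uniform vec4 uGlowColor;
varying vec2 vUv;

void main() {
  vec4 base = texture2D(uMainTex, vUv);
  float d = texture2D(uEdgeDist, vUv).r;        // 0 at the edge, 1 far inside
  float glow = 1.0 - smoothstep(0.0, 0.25, d);  // strongest at the edge
  gl_FragColor = base + uGlowColor * glow;      // additive "shine" on top of the image
}
```

The 0.25 threshold is just a stand-in for however thick you want the glow to be.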
If you don't need the image to be that dynamic, then maybe just creating an inner-glow black/white texture in Photoshop and overlaying it with an additive shader would work better for you. If you don't know how to write shaders, then the two approaches above may be a bit of a tall order.
I tried InstancedBufferGeometry, and it works great.
However, intersection is not happening with InstancedBufferGeometry. I checked in the three.js (r85) library: the checkBufferGeometryIntersection function uses only the position value, and I think the offset and orientation values need to be used together with the position.
I have another doubt as well: I have used only one RawShaderMaterial, so how can I highlight the selected geometry?
Can anyone guide me on this?
Thanks in advance.
As far as the CPU is concerned (which is where you do the raycasting), those instances do not exist. You do, however, have your master geometry available. What you can do is create another BufferGeometry and then create the same number of Mesh objects, all sharing that one geometry. Use the same logic you used for instancing to place them in a scene. You don't render them, thus saving the overhead of multiple draw calls, but you do have them available for intersection as if they were normal geometry, because they are (you're just not rendering them).
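A minimal sketch of that idea, assuming the per-instance positions live in an offsets array of THREE.Vector3 (a hypothetical name for the same data you used to fill the instanced offset attribute):

```js
// Build ordinary meshes that mirror the instanced draws so the normal Raycaster
// can hit them. The picking group is never added to the rendered scene, so it
// costs no extra draw calls; the geometry itself is shared, not copied.
const pickingGroup = new THREE.Group();
const pickingMaterial = new THREE.MeshBasicMaterial();

for (let i = 0; i < offsets.length; i++) {
  const mesh = new THREE.Mesh(baseGeometry, pickingMaterial);
  mesh.position.copy(offsets[i]);      // same per-instance placement
  mesh.userData.instanceId = i;        // map a hit back to the instance
  pickingGroup.add(mesh);
}
pickingGroup.updateMatrixWorld(true);  // needed because the group is never rendered

// Raycast against the picking meshes instead of the instanced mesh.
const hits = raycaster.intersectObjects(pickingGroup.children);
if (hits.length) console.log('picked instance', hits[0].object.userData.instanceId);
```

For highlighting, the instance id from the hit can then be written into a per-instance attribute (or a uniform) that your RawShaderMaterial reads to tint the selected instance.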
As @pailhead already wrote, raycasting with instanced geometries cannot work.
An alternative approach to achieve the same goal is so-called GPU picking. For this, you render the scene into a framebuffer using a special shader that just outputs a unique color value for every instance.
You can then sample the point under the cursor from that framebuffer and compute the instance ID from the color value.
You can see an example of this technique here or here.
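In rough three.js terms the picking pass could look like the sketch below. This is only an outline: pickingScene is assumed to hold the instanced mesh drawn with a shader that outputs each instance's id encoded in the RGB channels, and the render-target calls use the newer setRenderTarget style (older releases pass the target to render() directly).

```js
// GPU picking sketch: render instance ids as colors into a 1x1 target under the
// cursor, read the pixel back, and decode the id.
const pickingTarget = new THREE.WebGLRenderTarget(1, 1);
const pixel = new Uint8Array(4);

function pickAt(x, y, renderer, camera) {
  const { width, height } = renderer.domElement;

  // Restrict rendering to the single pixel under the cursor.
  camera.setViewOffset(width, height, x, y, 1, 1);
  renderer.setRenderTarget(pickingTarget);
  renderer.render(pickingScene, camera);
  renderer.setRenderTarget(null);
  camera.clearViewOffset();

  renderer.readRenderTargetPixels(pickingTarget, 0, 0, 1, 1, pixel);
  return pixel[0] * 65536 + pixel[1] * 256 + pixel[2]; // instance id encoded in RGB
}
```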
I've been trying to render silhouettes on CAD models with WebGL. The closest I got to the desired result was with fwidth and a dot product between the normal and the eye vector. I found it difficult to control the width, though.
I saw another web-based viewer, and it's capable of doing something like this:
I started digging through the shaders, and the most I could figure out is that this is analytic: an actual line entity is drawn, and the width is achieved by rendering a quad instead of default WebGL lines. There is a bunch of logic in the shader, and my best guess is that the vertex positions are simply updated on every render.
This is a procedural model, so I guess that for cones and cylinders, two lines can always be allocated, the silhouette points computed, and the lines updated.
If that is the case, would it be a good idea to try to do something like this in the shader (maybe it's already happening and I just didn't understand it)? I can see a cylinder being written to attributes or uniforms and the points being computed there.
Is there an approach like this already documented somewhere?
Edit 8/15/17
I have not found any papers or documented techniques about this, but the question got a couple of votes.
Given that I do have information about the cylinders and cones, my idea is to sample the normal of the parametric surface at each vertex, push the surface out by a factor that covers some number of pixels in screen space, write that to the stencil buffer, and then draw a thick line, clipping it against the actual shape of the surface.
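The "push the surface out" part of that idea would be a very small vertex shader. A sketch in three.js ShaderMaterial terms (uOutlineWidth is a hypothetical uniform giving the offset in view-space units; keeping a constant width in pixels would still require scaling it by depth, which is the part I have not worked out):

```glsl
// Sketch of the inflate-along-the-normal pass used to build the stencil mask.
uniform float uOutlineWidth;

void main() {
  vec3 viewNormal = normalize(normalMatrix * normal);
  vec4 viewPos = modelViewMatrix * vec4(position, 1.0);
  viewPos.xyz += viewNormal * uOutlineWidth;   // push the surface outward
  gl_Position = projectionMatrix * viewPos;
}
```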
The traditional shader-based method is Gooch shading. The original paper is here:
http://artis.imag.fr/~Cyril.Soler/DEA/NonPhotoRealisticRendering/Papers/p447-gooch.pdf
There is also the old-fashioned OpenGL technique from Jeff Lander.
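For reference, the tone-shading part of Gooch shading is only a few lines in a fragment shader. A minimal sketch (the cool/warm constants roughly follow the paper's blue/yellow defaults, the object-color contribution is omitted, and vNormal / uLightDir are assumed to be supplied by your own vertex shader and uniforms):

```glsl
// Minimal Gooch cool-to-warm tone mapping; silhouette lines are a separate edge pass.
uniform vec3 uLightDir;
varying vec3 vNormal;

void main() {
  vec3 kCool = vec3(0.0, 0.0, 0.55);   // cool (blue) tone
  vec3 kWarm = vec3(0.3, 0.3, 0.0);    // warm (yellow) tone
  float t = (dot(normalize(vNormal), normalize(uLightDir)) + 1.0) * 0.5;
  gl_FragColor = vec4(mix(kCool, kWarm, t), 1.0);
}
```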
I would like to ask for help with the WebGL engine I am making. I am stuck on texture atlases. There is a texture containing 2x2 pictures, and I map its upper-left quarter onto the geometry (the texture coordinates are 0-0.5, 0-0.5).
This works properly, but when I view the geometry from afar, the pictures all blur together and give strange-looking colours. I think this is caused by the automatically generated mipmaps: when viewed from a distance, the texture unit uses the 1x1 mipmap level, where the 4 pictures are blurred together into one pixel.
It was suggested that I generate the mipmaps myself with a maximum-level setting (GL_TEXTURE_MAX_LEVEL), but that is not supported by WebGL. It was also suggested that I use the textureLod function in the fragment shader, but WebGL only lets me use it in the vertex shader.
The only solution seems to be the bias value that can be given as the 3rd parameter of the fragment shader's texture2D function, but with this I can only offset the mipmap LOD, not set its actual value.
My idea is to use the depth value (the distance from the camera) to adjust the bias (making it more and more negative with distance), which ensures that at greater distances the smallest mipmap levels are not used and the sample is always taken from a higher-resolution mipmap level. The issue with this is that I must also calculate the angle of the given surface to the camera, because the LOD value depends on that as well.
So the bias = depth + some combination of the angle. I would like to ask for help calculating this. If anyone has other ideas concerning WebGL texture atlases, I would gladly use them.
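To make the bias idea concrete, it might look roughly like the sketch below (vViewDist is a varying carrying the vertex-to-camera distance, and the constants are placeholders; the view-angle term I am asking about is not included yet):

```glsl
// Sketch: push texture2D toward higher-resolution mip levels as distance grows.
uniform sampler2D uAtlas;
varying vec2 vUv;        // already remapped into the 0.0-0.5 sub-tile
varying float vViewDist; // distance from the camera, computed in the vertex shader

void main() {
  float bias = -log2(max(vViewDist * 0.05, 1.0)); // more negative with distance
  gl_FragColor = texture2D(uAtlas, vUv, bias);
}
```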
I'm using OpenGL ES + GLKit. I've never been this low-level before in my life, so I still have a lot to learn. I've developed Unity games before, where you just give it a .obj file and the corresponding texture and it's done. (The UV mapping happens to be inside the .obj file?)
I want to develop a kind of special toon shader with some different characteristics for use with a 3D model. So I need to write a vertex shader (.vsh) and a fragment shader (.fsh), right?
However, I have just learned that in order to apply a texture to a model with the correct UV coordinates, you have to do this in a shader (am I right?), with a "texture shader".
So, if I want to both apply the texture with UV mapping and then apply my special toon shader, I have to write both in the same shader? Is there no way I can create a plug-and-play toon shader that I can use with anything?
As a side question, in which file format are the UV coordinates stored, and how can I take them into a shader program? What kind of attribute variable?
"So I need to write a vertex shader (.vsh) and fragment shader (.fsh) right?"
Yes.
"However, I just know that in order to apply a texture to a model with correct UV coordinate..."
True
"There is no way I can create a plug-and-play Toon shader so I can use it with anything?"
Check Uber-Shaders
"...and how can I take that in to a shader program? What kind of attribute variable?"
You define your attributes in the shader yourself. Check this GLSL tutorial.
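For example, the UV coordinates are just another per-vertex attribute that you name and bind from your own code. A minimal sketch (the attribute and uniform names here are arbitrary):

```glsl
// Vertex shader: pass the UV attribute through to the fragment shader.
attribute vec4 aPosition;
attribute vec2 aTexCoord;              // the UV coordinate for this vertex
uniform mat4 uModelViewProjection;
varying vec2 vTexCoord;

void main() {
  vTexCoord = aTexCoord;
  gl_Position = uModelViewProjection * aPosition;
}
```

```glsl
// Fragment shader: sample the texture with the interpolated UV; any toon-style
// remapping of the sampled color would go in this same shader.
precision mediump float;
uniform sampler2D uTexture;
varying vec2 vTexCoord;

void main() {
  gl_FragColor = texture2D(uTexture, vTexCoord);
}
```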