What is the difference between EffectComposer & NodePostProcessing in three.js?

It appears that both of these are supposed to chain post-processing effects during rendering, but I don't understand how they differ or why you would use one over the other.
The three.js docs have a sparse post-processing page, but it never mentions anything about nodes, despite several post-processing examples using them.
Ultimately, what I'm interested in doing is similar to this example, except I'd also like to add sharpening. But I can find zero documentation about the "Nodes" that are being used.

The node material in three.js is experimental and thus not documented. If you want to do post-processing with three.js, it's best to stick with EffectComposer and its respective examples. Almost all documentation, tutorials, or post-processing demos you find online are based on EffectComposer.
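For reference, a typical EffectComposer chain looks roughly like the sketch below. The import paths vary between three.js versions (newer builds use 'three/addons/...'), and SharpenShader is not a built-in pass, just an illustrative unsharp-mask ShaderPass, since I'm not aware of a dedicated sharpen pass in the examples:

```javascript
import * as THREE from 'three';
import { EffectComposer } from 'three/examples/jsm/postprocessing/EffectComposer.js';
import { RenderPass } from 'three/examples/jsm/postprocessing/RenderPass.js';
import { UnrealBloomPass } from 'three/examples/jsm/postprocessing/UnrealBloomPass.js';
import { ShaderPass } from 'three/examples/jsm/postprocessing/ShaderPass.js';

// scene, camera and renderer are assumed to exist already
const composer = new EffectComposer(renderer);
composer.addPass(new RenderPass(scene, camera));                 // draw the scene first
composer.addPass(new UnrealBloomPass(                            // then feed it through effect passes
  new THREE.Vector2(window.innerWidth, window.innerHeight), 1.5, 0.4, 0.85));

// Illustrative sharpen pass: an unsharp mask written as a custom ShaderPass.
const SharpenShader = {
  uniforms: {
    tDiffuse:   { value: null },  // EffectComposer fills this with the previous pass's output
    resolution: { value: new THREE.Vector2(window.innerWidth, window.innerHeight) },
    strength:   { value: 0.5 }
  },
  vertexShader: `
    varying vec2 vUv;
    void main() {
      vUv = uv;
      gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
    }`,
  fragmentShader: `
    uniform sampler2D tDiffuse;
    uniform vec2 resolution;
    uniform float strength;
    varying vec2 vUv;
    void main() {
      vec2 px = 1.0 / resolution;
      vec4 center = texture2D(tDiffuse, vUv);
      vec4 blur = (texture2D(tDiffuse, vUv + vec2( px.x, 0.0)) +
                   texture2D(tDiffuse, vUv + vec2(-px.x, 0.0)) +
                   texture2D(tDiffuse, vUv + vec2(0.0,  px.y)) +
                   texture2D(tDiffuse, vUv + vec2(0.0, -px.y))) * 0.25;
      gl_FragColor = center + (center - blur) * strength;  // boost detail relative to the local average
    }`
};
composer.addPass(new ShaderPass(SharpenShader));

function animate() {
  requestAnimationFrame(animate);
  composer.render();  // call this instead of renderer.render(scene, camera)
}
animate();
```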

Related

Creating frosted glass in three.js / WebGL

I'm having trouble finding out how to create a material with the look of frosted glass. I haven't found anything on the web that looks like what I want to do.
I've tried a lot of settings for the material.
In this link you can see what I'm trying to get.
Does anybody have an idea how to solve this?
Regards
Rikard
One approach I've encountered that worked well for me in the past is to blit the portion of the framebuffer you want frosted, using the blur algorithm or normal pattern of your choice. A stencil mask, as part of the glass shader, is used to determine which portions should be affected and which should not.
This article has a nice writeup on glass refraction which, when used with a blur, will give a good effect.
https://beclamide.medium.com/advanced-realtime-glass-refraction-simulation-with-webgl-71bdce7ab825
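If it helps, here is a rough three.js sketch of that idea. Names like glassMesh, scene, camera and renderer are assumptions, and the "blur" is faked by rendering the background into a small render target so that linear filtering smears it; a proper separable Gaussian blur pass plus the stencil mask mentioned above would give a much better result:

```javascript
import * as THREE from 'three';

// Low-resolution copy of everything behind the glass; the heavy downsampling
// plus linear filtering acts as a very cheap blur for this sketch.
const blurTarget = new THREE.WebGLRenderTarget(128, 128);

const bufferSize = renderer.getDrawingBufferSize(new THREE.Vector2());

const glassMaterial = new THREE.ShaderMaterial({
  uniforms: {
    tBackground: { value: blurTarget.texture },
    resolution:  { value: bufferSize }
  },
  vertexShader: `
    void main() {
      gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
    }`,
  fragmentShader: `
    uniform sampler2D tBackground;
    uniform vec2 resolution;
    void main() {
      vec2 screenUv = gl_FragCoord.xy / resolution;        // sample what is behind this fragment
      vec3 blurred = texture2D(tBackground, screenUv).rgb;
      gl_FragColor = vec4(blurred * 0.9 + vec3(0.1), 1.0); // slight milky tint
    }`
});
glassMesh.material = glassMaterial;

function render() {
  // 1. "Blit": render the scene without the glass into the small target.
  glassMesh.visible = false;
  renderer.setRenderTarget(blurTarget);
  renderer.render(scene, camera);

  // 2. Render the full scene; the glass surface now shows the blurred copy.
  glassMesh.visible = true;
  renderer.setRenderTarget(null);
  renderer.render(scene, camera);

  requestAnimationFrame(render);
}
requestAnimationFrame(render);
```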
I know it's not WebGL per se, but I've used the Unity frosted glass shader below before, to great effect. You may be able to extract the pertinent pieces from it and use that knowledge to assemble a WebGL version. https://github.com/andydbc/unity-frosted-glass
I'm about to undertake this myself, and will update this answer with actual code 'if' I succeed.

WebGL Custom Shader Fluid on Image

I am currently trying to dive into the topic of WebGL shaders with THREE.js. I would appreciate it if someone could give me some starting points for the following scenario:
I would like to create a fluid-like material which either interacts with the user's mouse or «flows» on its own.
A little like this:
http://cake23.de/turing-fluid.html
I would like to pass a background image to it, which serves as a starting point in terms of which colors are shown in the «liquid sauce» and where they are at the beginning. So to say: I define the initial image, which is then transformed by a self-initiated liquid flow and also by the user's interaction.
How I would proceed, with my limited knowledge:
I create a plane with the wanted image as a texture.
On top (between the image and the camera) I create a new mesh (a plane too?), and this mesh has some custom vertex and fragment shaders applied.
Those shaders should somehow take the color from behind (from the image) and then move those vertices around following some physical rules...
I realize that the example linked above has unminified code, but it is still so much that I can't really break it down into simpler terms that I fully understand. So I would really appreciate it if someone could give me some simpler concepts to serve as a starting point.
more pages addressing things like this:
http://www.ibiblio.org/e-notes/webgl/gpu/fluid.htm
https://29a.ch/sandbox/2012/fluidwebgl/
https://haxiomic.github.io/GPU-Fluid-Experiments/html5/
Well, anyway thanks for every link or reference, for every basic concept or anything you'd like to share.
Cheers
Edit:
Getting a visually similar result to this image would be great:
I'm trying to accomplish a similar thing. I've been surfing the web a lot, looking for any hint I can use. So far, my conclusions are:
Try to support yourself using three.js.
The magic is really in the shaders, mostly the fragment shaders; it could be a good thing to start understanding how to write them and how they work. This link is a good start: shader tutorial
Understanding the dynamic (natural/real) behavior of fluids could be valuable (equations).
Maybe this can help you a bit too: Raindrop simulation
If you have found something more around that, let me know.
I found these shaders already created. Maybe one of them can help you without forcing you to learn plenty of stuff: splash shaders
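In case it helps, here is a minimal three.js skeleton of the plane-plus-custom-shader setup described in the question: one plane, the image as a texture, and a fragment shader that pushes the sampling coordinates around near the mouse. It is not a real fluid simulation, just the structure to start from ('background.jpg' and the ripple math are placeholders):

```javascript
import * as THREE from 'three';

const scene = new THREE.Scene();
const camera = new THREE.OrthographicCamera(-1, 1, 1, -1, 0, 1);
const renderer = new THREE.WebGLRenderer();
renderer.setSize(window.innerWidth, window.innerHeight);
document.body.appendChild(renderer.domElement);

const uniforms = {
  uTexture: { value: new THREE.TextureLoader().load('background.jpg') }, // your start image
  uMouse:   { value: new THREE.Vector2(0.5, 0.5) },
  uTime:    { value: 0 }
};

const material = new THREE.ShaderMaterial({
  uniforms,
  vertexShader: `
    varying vec2 vUv;
    void main() {
      vUv = uv;
      gl_Position = vec4(position, 1.0); // full-screen quad, camera transform not needed
    }`,
  fragmentShader: `
    uniform sampler2D uTexture;
    uniform vec2 uMouse;
    uniform float uTime;
    varying vec2 vUv;
    void main() {
      float dist = distance(vUv, uMouse);
      // Push the sampling coordinates away from the mouse, fading with distance.
      // A real fluid sim would read a simulated velocity field here instead.
      vec2 offset = (vUv - uMouse) * 0.3 * sin(10.0 * dist - uTime * 2.0) * exp(-4.0 * dist);
      gl_FragColor = texture2D(uTexture, vUv + offset);
    }`
});

const quad = new THREE.Mesh(new THREE.PlaneGeometry(2, 2), material);
quad.frustumCulled = false; // the vertex shader bypasses the camera, so skip culling
scene.add(quad);

window.addEventListener('mousemove', (e) => {
  uniforms.uMouse.value.set(e.clientX / window.innerWidth, 1.0 - e.clientY / window.innerHeight);
});

renderer.setAnimationLoop((time) => {
  uniforms.uTime.value = time / 1000;
  renderer.render(scene, camera);
});
```

The examples linked above go further: they typically simulate a velocity or reaction-diffusion field in an off-screen texture each frame and use that instead of a hard-coded ripple.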
Good luck.

Simple morphing animation between two images

I'm looking to implement a simple morphing animation between two images.
Here's a simple demo of what I'm trying to create: http://i.imgur.com/7377yHr.gif
I'm pretty comfortable with Objective-C and JavaScript but since the concepts and algorithms are abstract, I'm more than willing to see examples in any language or framework.
I would like to know how hard it would be to tackle this. It doesn't have to be exact; as long as it gives the impression of a morph I'll be satisfied.
Where would I start?
It seems like your example uses a combination of mesh warp morphing and cross-dissolve morphing. Mesh morphing can be tricky and, as far as I know, it requires manual input (defining the mesh), so depending on what you want to do it might not be suitable for you.
If you are looking for a cheap technique (in terms of effort), probably just doing a cross dissolve would work for you, since it is very easy to implement. You just need to combine both images by increasing the alpha of the target image and decreasing the alpha of the origin image.
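Here is a minimal sketch of that cross dissolve using the 2D canvas API, assuming two equally sized images imgA and imgB and a canvas element with id "morph" (all placeholder names):

```javascript
const canvas = document.getElementById('morph');
const ctx = canvas.getContext('2d');

// t goes from 0 (only the origin image) to 1 (only the target image).
function drawCrossDissolve(t) {
  ctx.clearRect(0, 0, canvas.width, canvas.height);
  ctx.globalAlpha = 1 - t;                                  // fade the origin image out
  ctx.drawImage(imgA, 0, 0, canvas.width, canvas.height);
  ctx.globalAlpha = t;                                      // fade the target image in
  ctx.drawImage(imgB, 0, 0, canvas.width, canvas.height);
  ctx.globalAlpha = 1;
}

let start = null;
function animate(now) {
  if (start === null) start = now;
  const t = Math.min((now - start) / 2000, 1);              // 2-second dissolve
  drawCrossDissolve(t);
  if (t < 1) requestAnimationFrame(animate);
}
requestAnimationFrame(animate);
```

A full mesh warp would also move sample positions rather than only blending colors, but the blend above is the cheap part you can get working in minutes.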
These articles give an overview of the techniques:
[PDF] http://css1a0.engr.ccny.cuny.edu/~wolberg/pub/vc98.pdf
[PDF] http://www.sorging.ro/en/member/serveFile/format/pdf/slug/image-morphing-techniques
[PDF] http://cs.haifa.ac.il/hagit/courses/ip/Lectures/Ip05_GeomOper.pdf
The last link comes from a comment in a similar question: Morphing, 3 algorithms, image processing

ThreeJS: is it possible to simplify an object / reduce the number of vertexes?

I'm starting to learn ThreeJS. I have some very complex models to display.
These models come from Autocad files that my customer provides.
But sometimes the amount of detail in the model is just way too much for the purpose of the website.
I would like to reduce the amount of vertexes in the model to simplify the display and enhance performance.
Is this possible from within ThreeJS? Or is there maybe an other solution for this?
There's a modifier called SimplifyModifier that works very well. You'll find it in the three.js examples:
https://threejs.org/examples/#webgl_modifier_simplifier
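For reference, using it looks roughly like this (the import path differs between three.js versions, newer builds use 'three/addons/...', and mesh stands for your loaded model's mesh):

```javascript
import { SimplifyModifier } from 'three/examples/jsm/modifiers/SimplifyModifier.js';

const modifier = new SimplifyModifier();

// Remove, for example, 75% of the vertices; `count` is how many to take away.
const count = Math.floor(mesh.geometry.attributes.position.count * 0.75);
mesh.geometry = modifier.modify(mesh.geometry, count);
mesh.geometry.computeVertexNormals(); // rebuild normals in case the modifier drops them
```

How well this works depends a lot on the topology of the model, so try different ratios.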
If you can import the model into Blender, you could try Decimate Modifier. In the latest version of Blender, it features three different methods with configurable "amount" parameters. Depending on your mesh topology, it might reduce the poly count drastically with virtually no visual changes, or it might completely break the model with even a slight reduction attempt. Other 3d packages should include a similar functionality, but I've never used those.
Another thing that came to mind: sometimes when I've encountered a too-high-poly Blender model, a good start has been checking whether it has a Subdivision Modifier applied and removing it if so. While I do not know if there's something similar in Autocad, it might be worth investigating.
I updated the SimplifyModifier function; it works with textured models. Here is an example:
https://rigmodels.com/3d_LOD.php?view=5OOOEWDBCBR630L4R438TGJCD
You can extract the JS code and use it in your project.

Image movement calibration

I have a series of mostly identical images taken over a period of time. However, the objects in the images drift over time, and I would like to correct for this. What is a good way to do this?
[EDIT] Okay, I may have to explain why I'm doing this. I've taken a series of X-ray images of objects at different X-ray energies. I now want to compare the object at the various energies, but since it drifts I have to correct for the drift first. The object has no sharp edges or anything else that would otherwise be easy to use for alignment. Therefore I'm looking for a more general method.
In its general form this problem is known as image registration, and it is a large topic of research in the image processing community. There are a variety of different methods and algorithms, often specialized for a particular image modality. Depending on your images, doing this could be easy or difficult. I would recommend using one of the registration methods found on the File Exchange.
Based on your description of your images, it seems a rigid transformation should be enough. In that case, this method should work nicely.
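If you only need to estimate a translational drift (the simplest rigid case) and want to see the idea in code, a brute-force version looks like the sketch below. It is written in JavaScript only to match the rest of this page and is not the File Exchange method recommended above; real registration tools handle subpixel shifts, rotation and intensity differences between energies far more robustly.

```javascript
// Estimate the integer-pixel drift of `moving` relative to `ref` by searching
// for the shift that minimizes the mean squared difference over the overlap.
// `ref` and `moving` are Float32Arrays of length width * height (grayscale).
// Brute force is O(maxShift^2 * width * height), so use small or downsampled images.
function estimateDrift(ref, moving, width, height, maxShift = 20) {
  let best = { dx: 0, dy: 0, score: Infinity };
  for (let dy = -maxShift; dy <= maxShift; dy++) {
    for (let dx = -maxShift; dx <= maxShift; dx++) {
      let sum = 0;
      let count = 0;
      for (let y = 0; y < height; y++) {
        const ys = y + dy;
        if (ys < 0 || ys >= height) continue;
        for (let x = 0; x < width; x++) {
          const xs = x + dx;
          if (xs < 0 || xs >= width) continue;
          const d = ref[y * width + x] - moving[ys * width + xs];
          sum += d * d;
          count++;
        }
      }
      const score = sum / count; // mean squared difference over the overlapping region
      if (score < best.score) best = { dx, dy, score };
    }
  }
  // (dx, dy) is the estimated drift; shifting `moving` by (-dx, -dy) aligns it with `ref`.
  return best;
}
```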
