Porting frame buffer Shadertoy to three.js

I want to port this shader: https://www.shadertoy.com/view/ldcGDX to three.js. I'm new to fragment shaders, but I'm using this thread as a rough guide for porting this type of frame-buffer shader: https://discourse.threejs.org/t/basi...sl-shaders/409
Sadly, there seem to be lots of variables that need to be converted in ways that are unknown to me, and resources for this type of porting are sparse, to say the least. So I'm asking all the shader masters out there for some guidance on how to port this code and its variables successfully, or for pointers to good information!
Thank you!
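For later readers: the core of porting a Shadertoy frame-buffer (Buffer A) shader to three.js is a ping-pong pair of render targets, so the shader can read its own previous frame through iChannel0. Below is a minimal, hedged sketch; the trivial bufferFrag is only a stand-in for the real Shadertoy code (which would need its mainImage wrapped into main), and the float render targets assume the hardware supports them:

import * as THREE from 'three';

// Stand-in for the ported Shadertoy buffer shader.
const bufferFrag = `
  uniform vec3 iResolution;
  uniform float iTime;
  uniform sampler2D iChannel0;          // previous frame (Buffer A feedback)
  void main() {
    vec2 uv = gl_FragCoord.xy / iResolution.xy;
    vec4 prev = texture2D(iChannel0, uv);
    gl_FragColor = mix(prev, vec4(uv, 0.5 + 0.5 * sin(iTime), 1.0), 0.05);
  }`;

const renderer = new THREE.WebGLRenderer();
renderer.setSize(innerWidth, innerHeight);
document.body.appendChild(renderer.domElement);

// Two targets so we never read and write the same texture in one pass.
const opts = { type: THREE.FloatType, minFilter: THREE.NearestFilter, magFilter: THREE.NearestFilter };
let rtA = new THREE.WebGLRenderTarget(innerWidth, innerHeight, opts);
let rtB = new THREE.WebGLRenderTarget(innerWidth, innerHeight, opts);

const scene = new THREE.Scene();
const camera = new THREE.OrthographicCamera(-1, 1, 1, -1, 0, 1);
const uniforms = {
  iResolution: { value: new THREE.Vector3(innerWidth, innerHeight, 1) },
  iTime: { value: 0 },
  iChannel0: { value: rtB.texture },
};
scene.add(new THREE.Mesh(
  new THREE.PlaneGeometry(2, 2),
  new THREE.ShaderMaterial({ uniforms, fragmentShader: bufferFrag })
));

function animate(t) {
  uniforms.iTime.value = t / 1000;
  uniforms.iChannel0.value = rtB.texture;   // read last frame
  renderer.setRenderTarget(rtA);            // write this frame
  renderer.render(scene, camera);
  uniforms.iChannel0.value = rtA.texture;   // show the fresh buffer
  renderer.setRenderTarget(null);           // (a real port would use a separate Image pass)
  renderer.render(scene, camera);
  [rtA, rtB] = [rtB, rtA];                  // swap roles for the next frame
  requestAnimationFrame(animate);
}
requestAnimationFrame(animate);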

Related

Weird jitter of objects in Three.js using Mali GPU

I have a strange problem which has been bugging me for quite a while now, the issue is best explained by a short video:
As you can see, the objects in the scene jitter when you move the camera around, and something similar also happens every now and then when the camera is not moving. It's been driving me crazy for a while now. This video was taken on a Tinkerboard with TinkerOS, but the same issue is also present on a Tinkerboard with FlintOS.
On a regular laptop there is no issue and everything moves smoothly. I'm not sure whether this is a bug or expected behaviour given the differences in hardware, so I was hoping somebody could shed some light on this.
Here is a WebGL report from the Tinkerboard:
Here is a WebGL report from my laptop:
Obviously there are differences but I have no idea if any of these difference would explain this behaviour.
Can anyone clarify?
Thanks!
The most likely issue is precision: most mobile GPUs map mediump variables in shaders to FP16 data types, while most desktop GPUs map them to FP32 data types.
What are your shaders here? Try using "highp" everywhere you compute positions.
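For illustration, a minimal sketch of two ways to request FP32 in three.js (the varying name and shader body are just for illustration):

import * as THREE from 'three';

// 1) Ask three.js to compile its shaders with high precision renderer-wide.
const renderer = new THREE.WebGLRenderer({ precision: 'highp' });

// 2) In a custom shader, qualify the position math explicitly;
//    individual variables can override the default precision.
const vertexShader = `
  varying highp vec3 vWorldPosition;
  void main() {
    highp vec4 worldPosition = modelMatrix * vec4(position, 1.0);
    vWorldPosition = worldPosition.xyz;
    gl_Position = projectionMatrix * viewMatrix * worldPosition;
  }`;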

WebGL/THREE.js Reading floating point textures

I was trying to figure out how to read the data from a floating point texture using three.js, and I came across this post: https://github.com/mrdoob/three.js/issues/9513
I am able to successfully read from floating point textures in Firefox; however, in Chrome it throws the same error:
THREE.WebGLRenderer.readRenderTargetPixels: renderTarget is not in UnsignedByteType or implementation defined type.
I am not sure what I need to do to be able to read the textures; any help would be greatly appreciated.
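A hedged sketch of one way around this: ask the context what readPixels actually supports before reading, and fall back to bytes otherwise (floatTarget, byteTarget, width and height are illustrative names):

const gl = renderer.getContext();
renderer.setRenderTarget(floatTarget);   // the query applies to the currently bound framebuffer
const canReadFloat =
  gl.getParameter(gl.IMPLEMENTATION_COLOR_READ_TYPE) === gl.FLOAT &&
  gl.getParameter(gl.IMPLEMENTATION_COLOR_READ_FORMAT) === gl.RGBA;
renderer.setRenderTarget(null);

if (canReadFloat) {
  // This browser/GPU pair can read floats back directly.
  const data = new Float32Array(width * height * 4);
  renderer.readRenderTargetPixels(floatTarget, 0, 0, width, height, data);
} else {
  // Fallback: pack the floats into 8-bit RGBA in one extra shader pass, then
  // read UNSIGNED_BYTE, which every implementation is required to support.
  const data = new Uint8Array(width * height * 4);
  renderer.readRenderTargetPixels(byteTarget, 0, 0, width, height, data);
}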

WebGL Custom Shader Fluid on Image

I am currently trying to dive into the topic of WebGL shaders with THREE.js. I would appreciate it if someone could give me some starting points for the following scenario:
I would like to create a fluid-like material which either interacts with the user's mouse or «flows» on its own,
a little like this:
http://cake23.de/turing-fluid.html
I would like to pass a background image to it, which serves as a starting point in terms of which colors are shown in the «liquid sauce» and where they are at the beginning. So to say: I define the initial image, which is then transformed by a self-initiated liquid flow and also by the user's interaction.
How I would proceed, with my limited knowledge:
I create a plane with the wanted image as a texture.
On top (between the image and the camera) I create a new mesh (a plane too?), and this mesh has some custom vertex and fragment shaders applied.
Those shaders should somehow take the color from behind (from the image) and then move those vertices around following some physical rules (see the sketch below)...
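As a starting point for steps 2 and 3, here is a minimal, hedged sketch that only distorts a background image around the mouse (the uniform names uImage and uMouse and the file background.jpg are illustrative); a real fluid would add a ping-pong velocity buffer on top of this:

import * as THREE from 'three';

const frag = `
  uniform sampler2D uImage;
  uniform vec2 uMouse;                  // cursor in 0..1 UV space
  varying vec2 vUv;
  void main() {
    vec2 toMouse = vUv - uMouse;
    float d = length(toMouse);
    // push UVs away from the cursor, fading with distance
    vec2 uv = vUv + (toMouse / max(d, 1e-4)) * 0.02 * exp(-20.0 * d * d);
    gl_FragColor = texture2D(uImage, uv);
  }`;

const material = new THREE.ShaderMaterial({
  uniforms: {
    uImage: { value: new THREE.TextureLoader().load('background.jpg') },
    uMouse: { value: new THREE.Vector2(0.5, 0.5) },
  },
  vertexShader: `
    varying vec2 vUv;
    void main() {
      vUv = uv;
      gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
    }`,
  fragmentShader: frag,
});
const plane = new THREE.Mesh(new THREE.PlaneGeometry(2, 2), material);

window.addEventListener('pointermove', (e) => {
  material.uniforms.uMouse.value.set(e.clientX / innerWidth, 1 - e.clientY / innerHeight);
});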
I realize that the given example above has unminified code, but it is still so much that I can't really break it down into simpler terms which I fully understand. So I would really appreciate it if someone could give me some simpler concepts to serve as a starting point.
more pages addressing things like this:
http://www.ibiblio.org/e-notes/webgl/gpu/fluid.htm
https://29a.ch/sandbox/2012/fluidwebgl/
https://haxiomic.github.io/GPU-Fluid-Experiments/html5/
Well, anyway thanks for every link or reference, for every basic concept or anything you'd like to share.
Cheers
Edit:
Getting a visually similar result to this image would be great:
I'm trying to accomplish a similar thing. I have been surfing the web a lot, looking for any hint I can use. So far, my conclusions are:
Try to support yourself using three.js.
The magic is really in the shaders, mostly the fragment shaders; a good first step is understanding how to write them and how they work. This link is a good start: shader tutorial
Understanding the dynamic (natural/real) behavior of fluids could be valuable (equations).
Maybe this can help you a bit too: Raindrop simulation
If you have found something more around that, let me know.
I found these shaders already created. Maybe one of them can help you without forcing you to learn plenty of stuff: splash shaders
Good luck!

Is it possible to read data from a vertex shader?

I am trying to write a simple GPGPU benchmark: load the data into a vertex buffer array, do some computation in the vertex shader, and read the data back. Is it possible? I am planning to run this on SGX GPUs.
Is there any way to do that? I don't want it to go through the transformation, clipping, tiling, and pixel-processing phases; that incurs additional overhead and changes my data.
Can I read back the data and examine it on the CPU? Is there any way in OpenGL ES?
I can do the computations in the pixel shader as well, by sending data through a texture, multiplying by some constants, and writing it to another frame buffer. But how do I get it back? I don't really want to present it to the screen.
Is there any way to do that? Can somebody point me to some tutorials, if available?
Reading vertex output: look for transform feedback, but you will need OpenGL ES 3.0 to use it.
For ES 2.0 I suggest using the fragment shader and render-to-texture techniques. Here is a link with a tutorial.
After rendering to texture you basically have to read the pixels of the texture.
There is a nice tutorial on reading the data here.
Tutorial about transform feedback: http://open.gl/feedback
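In WebGL 2 (the browser binding of OpenGL ES 3.0), the transform feedback path looks roughly like this minimal sketch: it doubles each input value in the vertex shader and reads the result back, skipping rasterization entirely (all names are illustrative, and error checks are omitted):

const gl = document.createElement('canvas').getContext('webgl2');

const vsSource = `#version 300 es
in float a_value;
out float v_doubled;              // captured by transform feedback
void main() { v_doubled = a_value * 2.0; }`;
const fsSource = `#version 300 es
precision mediump float;
out vec4 o;
void main() { o = vec4(0.0); }`;  // never runs: rasterization is discarded

function compile(type, src) {
  const s = gl.createShader(type);
  gl.shaderSource(s, src);
  gl.compileShader(s);
  return s;
}
const prog = gl.createProgram();
gl.attachShader(prog, compile(gl.VERTEX_SHADER, vsSource));
gl.attachShader(prog, compile(gl.FRAGMENT_SHADER, fsSource));
gl.transformFeedbackVaryings(prog, ['v_doubled'], gl.SEPARATE_ATTRIBS);
gl.linkProgram(prog);
gl.useProgram(prog);

// Input data in a vertex buffer.
const input = new Float32Array([1, 2, 3, 4]);
const inBuf = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, inBuf);
gl.bufferData(gl.ARRAY_BUFFER, input, gl.STATIC_DRAW);
const loc = gl.getAttribLocation(prog, 'a_value');
gl.enableVertexAttribArray(loc);
gl.vertexAttribPointer(loc, 1, gl.FLOAT, false, 0, 0);

// Output buffer the vertex shader results are captured into.
const outBuf = gl.createBuffer();
gl.bindBuffer(gl.TRANSFORM_FEEDBACK_BUFFER, outBuf);
gl.bufferData(gl.TRANSFORM_FEEDBACK_BUFFER, input.byteLength, gl.STATIC_READ);
gl.bindBufferBase(gl.TRANSFORM_FEEDBACK_BUFFER, 0, outBuf);

gl.enable(gl.RASTERIZER_DISCARD);       // skip clipping, tiling and pixel processing
gl.beginTransformFeedback(gl.POINTS);
gl.drawArrays(gl.POINTS, 0, 4);
gl.endTransformFeedback();
gl.disable(gl.RASTERIZER_DISCARD);

// Examine the results on the CPU.
const result = new Float32Array(4);
gl.getBufferSubData(gl.TRANSFORM_FEEDBACK_BUFFER, 0, result);
console.log(result);                    // [2, 4, 6, 8]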
You cannot read data from the vertex shader directly in OpenGL ES 2.0.
Instead, you can send your data to the pixel/fragment shader, attach a texture to a Frame Buffer Object (FBO), and then use glReadPixels to get the data back to the CPU.
This link describes the concept and a code snippet:
here.
Hope this might help.
"...computation in the vertex shader, and read the data back. Is it possible? I am planning to run this on SGX GPUs."
Since SGX supports OpenGL ES 2.0, not 3.0, PowerVR SGX doesn't support reading vertex shader output (the OpenGL ES 2.0 spec doesn't include such functionality).
"can I read back the data and examine it in the CPU? is there anyway in opengl es?"
You can use framebuffer objects and read them back using glReadPixels. You can read about framebuffer objects here:
Ref: http://en.wikipedia.org/wiki/Framebuffer_Object
Ref: Reading the pixels values from the Frame Buffer Object (FBO) using Pixel Buffer Object (PBO)
If it is GPGPU calculation you are after, then I recommend you go for OpenCL.
SGX supports OpenCL EP 1.1.
While writing from the vertex shader to an offscreen buffer is not possible in OpenGL ES 2.0, you can do that from the pixel shader.
First you render to the offscreen buffer; then you can either render that as a texture onto another object on screen, or read it back using readPixels the usual way. A simple list of steps is given in https://gist.github.com/prabindh/9201855.
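In WebGL terms (the browser binding of OpenGL ES 2.0), those steps look roughly like this hedged sketch; the clear call stands in for the actual "compute" draw, and the names are illustrative:

const gl = document.createElement('canvas').getContext('webgl');
const W = 64, H = 64;

// Texture that will receive the fragment shader's output.
const tex = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, tex);
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, W, H, 0, gl.RGBA, gl.UNSIGNED_BYTE, null);

// Attach it to a framebuffer object and render offscreen.
const fbo = gl.createFramebuffer();
gl.bindFramebuffer(gl.FRAMEBUFFER, fbo);
gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.TEXTURE_2D, tex, 0);
gl.viewport(0, 0, W, H);
gl.clearColor(0.25, 0.5, 0.75, 1.0);   // stand-in for the real compute draw call
gl.clear(gl.COLOR_BUFFER_BIT);

// Read the results back to the CPU; RGBA/UNSIGNED_BYTE is always supported.
const pixels = new Uint8Array(W * H * 4);
gl.readPixels(0, 0, W, H, gl.RGBA, gl.UNSIGNED_BYTE, pixels);
console.log(pixels[0], pixels[1], pixels[2]);   // roughly 64, 128, 191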

GLSL PointSprite for particle system

I'm using a ParticleSystem with PointSprites (inspired by the Cocos2D source), but I wonder how to rebuild the functionality for OpenGL ES 2.0:
glEnable(GL_POINT_SPRITE_OES);                 // ES 1.1: enable point sprites
glEnableClientState(GL_POINT_SIZE_ARRAY_OES);  // ES 1.1: enable the per-point size array
glPointSizePointerOES(GL_FLOAT, sizeof(PointSprite), (GLvoid*)(sizeof(GL_FLOAT) * 2)); // sizes stored after the x/y floats
glDisableClientState(GL_POINT_SIZE_ARRAY_OES);
glDisable(GL_POINT_SPRITE_OES);
These calls generate BAD_ACCESS errors when using an OpenGL ES 2.0 context (the fixed-function client-state API no longer exists in ES 2.0).
Should I simply go with two TRIANGLES per point sprite? But that's probably not very efficient (overhead for the extra vertices).
EDIT:
So, my new problem with the suggested solution from:
https://gamedev.stackexchange.com/questions/11095/opengl-es-2-0-point-sprites-size/15528#15528
is the possibility of passing many different sizes in one batched call. I thought of using an attribute instead of a uniform, but then I would always need to pass a point size to my shaders, even when I'm not drawing GL_POINTS. So, maybe a second shader (a shader only for GL_POINTS)?! I'm not aware of the overhead of switching shaders every frame in the draw routine (because when the particle system is used, I naturally also want to render regular GL_TRIANGLES without a point size)... Any ideas on this?
Doing what I already commented there is what you need: https://gamedev.stackexchange.com/questions/11095/opengl-es-2-0-point-sprites-size/15528#15528
As for which approach to take, you can either use different shaders for the different types of drawables in your application, or add another boolean uniform to your shader and toggle whether it sets gl_PointSize; it's really up to you. What you need to keep in mind is that changing the shader program is one of the most time-costly operations, so drawing objects of the same type in a batch will be better in that case. I'm not really sure whether an if statement in your shader code will have a huge performance impact.
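In WebGL/three.js terms, the attribute approach from that answer looks roughly like this minimal sketch (attribute and variable names are illustrative): each particle carries its own size, gl_PointSize reads it in the vertex shader, and the whole system stays one batched draw call:

import * as THREE from 'three';

const count = 1000;
const positions = new Float32Array(count * 3);
const sizes = new Float32Array(count);
for (let i = 0; i < count; i++) {
  positions.set([Math.random() * 2 - 1, Math.random() * 2 - 1, 0], i * 3);
  sizes[i] = 2.0 + Math.random() * 30.0;   // per-particle size, no uniform switching
}
const geometry = new THREE.BufferGeometry();
geometry.setAttribute('position', new THREE.BufferAttribute(positions, 3));
geometry.setAttribute('size', new THREE.BufferAttribute(sizes, 1));

const material = new THREE.ShaderMaterial({
  vertexShader: `
    attribute float size;
    void main() {
      gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
      gl_PointSize = size;   // replaces GL_POINT_SIZE_ARRAY_OES from ES 1.1
    }`,
  fragmentShader: `
    void main() {
      if (length(gl_PointCoord - 0.5) > 0.5) discard;   // make the sprite round
      gl_FragColor = vec4(1.0);
    }`,
});
const particles = new THREE.Points(geometry, material);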
