WebGL/THREE.js Reading floating point textures - three.js

I was trying to figure out how to read the data from a floating point texture using three.js, and I came across this post: https://github.com/mrdoob/three.js/issues/9513
I am able to successfully read from floating point textures on Firefox, but in Chrome it throws the following error:

    THREE.WebGLRenderer.readRenderTargetPixels: renderTarget is not in UnsignedByteType or implementation defined type.

I am not sure what I need to do to be able to read the textures; any help would be greatly appreciated.
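For context, three.js only allows readRenderTargetPixels when the target is UnsignedByteType or matches the read-back type the WebGL implementation reports. A minimal sketch of querying that type and reading float data with raw WebGL, assuming a THREE.FloatType render target (renderTarget, width, and height are placeholders):

    // Query what the implementation can read back, then read floats directly.
    const gl = renderer.getContext();
    renderer.setRenderTarget( renderTarget ); // bind the target's framebuffer
    const readType = gl.getParameter( gl.IMPLEMENTATION_COLOR_READ_TYPE );
    if ( readType === gl.FLOAT ) {
        const buffer = new Float32Array( width * height * 4 );
        gl.readPixels( 0, 0, width, height, gl.RGBA, gl.FLOAT, buffer );
        // buffer now holds the RGBA float data
    }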

Related

WebGL2 - Write to texture/buffer at arbitrary position - OpenGL imageStore equivalent

I already know how to write to a texture using Framebuffers.
However, for a project I'm doing that requires FFTs, at one point I need to write, in the same shader, to more than one position of the texture/buffer at once.
I have already done this project in OpenGL, where I used an image texture and imageStore to achieve this effect.
How can I go about achieving this in WebGL?
I can't find anything useful online, since everything I can find either only reads from textures or only writes to a single point.
It seems there is no way in WebGL to write to scattered points on a texture.
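That said, one workaround sometimes used to emulate scattered writes is to draw gl.POINTS into a framebuffer-attached texture, one point per output value. A rough sketch, with hypothetical attribute names such as targetTexel:

    // Vertex shader: position each point on the texel it should write to.
    const scatterVertexShader = `
        attribute vec2 targetTexel; // destination texel for this value
        attribute vec4 value;       // the data to write there
        uniform vec2 targetSize;    // render-target size in texels
        varying vec4 vValue;
        void main() {
            vValue = value;
            // Map the texel coordinate to clip space, centered on the texel.
            vec2 clip = (targetTexel + 0.5) / targetSize * 2.0 - 1.0;
            gl_Position = vec4(clip, 0.0, 1.0);
            gl_PointSize = 1.0;
        }
    `;

    // Fragment shader: write the carried value into the target texel.
    const scatterFragmentShader = `
        precision mediump float;
        varying vec4 vValue;
        void main() { gl_FragColor = vValue; }
    `;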

Weird jitter of objects in Three.js using Mali GPU

I have a strange problem that has been bugging me for quite a while now; the issue is best explained by a short video:
As you can see, the objects in the scene jitter when you move the camera around, and something similar also happens every now and then when the camera is not moving. It's been driving me crazy. The video was taken on a Tinkerboard running TinkerOS, but the same issue appears on a Tinkerboard running FlintOS.
On a regular laptop there is no issue and everything moves smoothly. I'm not sure if this is a bug or expected behaviour given the differences in hardware, so I was hoping somebody could shed some light on this.
Here is a WebGL report from the Tinkerboard:
Here is a WebGL report from my laptop:
Obviously there are differences, but I have no idea whether any of them would explain this behaviour.
Can anyone clarify?
Thanks!
The most likely issue is precision: most mobile GPUs map mediump shader variables to FP16 data types, while most desktop GPUs map them to FP32.
What do your shaders look like? Try using highp everywhere you compute positions.
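A minimal sketch of both ways to request this in a standard three.js setup (the precision hint is a renderer option; in hand-written GLSL the qualifier goes in the shader source):

    // Ask three.js to use high precision in the shaders it generates.
    const renderer = new THREE.WebGLRenderer( { precision: 'highp' } );

    // In hand-written shaders, declare it explicitly at the top:
    // precision highp float;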

Three.js: Shader won't compile with gl.MAX_TEXTURE_IMAGE_UNITS textures

I'm using Three.js to build a scene in which I want to pack the maximum number of quads into each draw call. On my machine, the limiting factor is the number of textures I can display in each draw call.
What confuses me is that gl.getParameter(gl.MAX_TEXTURE_IMAGE_UNITS) returns 16, but if I try to pass exactly 16 textures into my THREE.RawShaderMaterial, I get the following error:

    THREE.WebGLProgram: shader error: 0 gl.VALIDATE_STATUS false
    gl.getProgramInfoLog ERROR: Implementation limit of 16 active fragment
    shader samplers (e.g., maximum number of supported image units)
    exceeded, fragment shader uses 17 samplers

If I pass exactly 15 textures in, the scene renders fine (though a texture is missing, of course).
My question is: Does Three.js add an additional texture to each draw call somewhere? If not, does anyone know what might account for this off by one problem? Any help others can offer on this question would be greatly appreciated.
Yes, most of the built-in materials inject various samplers for their various maps. It can be something as straightforward as an albedo map, but it could also be a shadow map, for example. Use RawShaderMaterial if you don't want three.js to inject anything.
Well, that's embarrassing. It turns out I was passing 16 textures to the shader, but I was trying to access texture2D(textures[16]), and it was this attempt to read from a sampler index greater than the maximum that threw the error.
It's interesting that passing a sampler array longer than gl.MAX_TEXTURE_IMAGE_UNITS does not itself throw an error -- it's accessing an index greater than gl.MAX_TEXTURE_IMAGE_UNITS - 1 that does.
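A small sketch of the guard that would have caught this (allTextures is a placeholder name):

    // Query the per-draw-call sampler limit and keep indices in range.
    const gl = renderer.getContext();
    const maxUnits = gl.getParameter( gl.MAX_TEXTURE_IMAGE_UNITS ); // e.g. 16
    // Valid sampler indices run from 0 to maxUnits - 1.
    const textures = allTextures.slice( 0, maxUnits );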

WebGL Custom Shader Fluid on Image

I am currently trying to dive into the topic of WebGL shaders with THREE.js. I would appreciate it if someone could give me some starting points for the following scenario:
I would like to create a fluid-like material which either interacts with the user's mouse or «flows» on its own, a little like this:
http://cake23.de/turing-fluid.html
I would like to pass a background image to it, which serves as a starting point in terms of which colors appear in the «liquid sauce» and where they are at the beginning. So to say: I define the initial image, which is then transformed by self-initiated liquid flow and also by the user's interaction.
How I would proceed, with my limited knowledge:
I create a plane with the wanted image as a texture.
On top (between the image and the camera) I create a new mesh (a plane too?), and this mesh has custom vertex and fragment shaders applied.
Those shaders should somehow take the color from behind (from the image) and then move those vertices around following some physical rules...
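A minimal three.js sketch of that plane-plus-custom-shader setup, assuming hypothetical uniform names (uImage, uMouse, uTime) and a pass-through fragment shader as a starting point:

    // Load the background image and feed it to a ShaderMaterial.
    const texture = new THREE.TextureLoader().load( 'background.jpg' );
    const material = new THREE.ShaderMaterial( {
        uniforms: {
            uImage: { value: texture },
            uMouse: { value: new THREE.Vector2() },
            uTime:  { value: 0 }
        },
        vertexShader: `
            varying vec2 vUv;
            void main() {
                vUv = uv;
                gl_Position = projectionMatrix * modelViewMatrix * vec4( position, 1.0 );
            }
        `,
        fragmentShader: `
            uniform sampler2D uImage;
            uniform vec2 uMouse;
            uniform float uTime;
            varying vec2 vUv;
            void main() {
                // Start by just showing the image; distortion/advection comes later.
                gl_FragColor = texture2D( uImage, vUv );
            }
        `
    } );
    const plane = new THREE.Mesh( new THREE.PlaneGeometry( 2, 2 ), material );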
I realize that the example above has unminified code, but it is still so much that I can't break it down into simpler terms that I fully understand. So I would really appreciate it if someone could give me some simpler concepts to serve as a starting point.
More pages addressing things like this:
http://www.ibiblio.org/e-notes/webgl/gpu/fluid.htm
https://29a.ch/sandbox/2012/fluidwebgl/
https://haxiomic.github.io/GPU-Fluid-Experiments/html5/
Well, anyway, thanks for every link or reference, for every basic concept or anything you'd like to share.
Cheers
Edit:
Getting a similar result (visually) to this image would be great:
I'm trying to accomplish a similar thing. I have been surfing the web a lot, looking for any hint I can use. So far, my conclusions are:
Try to support yourself using three.js.
The magic is really in the shaders, mostly the fragment shaders, so it would be a good idea to start by understanding how to write them and how they work. This link is a good start: shader tutorial
Understanding the dynamic (natural/real) behavior of fluids could be valuable (equations).
Maybe this can help you a bit too: Raindrop simulation
If you have found anything more on this, let me know.
I also found these ready-made shaders. Maybe one of them can help you without forcing you to learn plenty of other stuff: splash shaders
Good luck!

Accessing Specific Pixels (Corresponding to Facial Points) with Kinect 1.5

I've been researching the Kinect API and programming with the new SDK (1.5) for a few weeks now, and I'm basically trying to find where the eyes are in each image streamed from the Kinect sensor. I then want to get the RGB values for the pixels that make up the eyes. Though some variables, like pFaceModel2DPoint and pPts2D (both in Visualize.cpp), claim to store the x,y values for all 86 points that make up the face in the colorImage (of type IFTImage*), I have tested and re-tested these variables and cannot extract worthwhile data from them.
Furthermore, even if these x,y values corresponding to the eyes were correct for the given image, I cannot work out how to access the RGB values of each desired pixel. I know the macro (FTIMAGEFORMAT_UINT8_B8G8R8A8) that identifies the format in which the pixel data is stored, and I know that byte* pixels = colorImage->GetBuffer() gives the buffer for the current image streaming from the Kinect, but doing something as simple as pixels[rowNum*num_cols_per_row + colNum] = [...] inside a for loop does not yield anything useful.
I've been really discouraged and disappointed that I cannot get this working, and I have searched through so many sites and search engines for a resolution to a problem close to mine without finding anything. I wrote my own code several times using OpenCV and the Kinect, just the Kinect itself, and modifications of the MultiFace sample from the SDK. (The variables and functions listed above are from the MultiFace sample.) Any help would be extremely appreciated. Thank you!
Update: The Kinect API was unclear, hence my asking the question, but I have solved this problem after some trial and error. The image format is actually RGBX (laid out as BGRX), so elements 0-3 of the byte* correspond to pixel 0, elements 4-7 correspond to pixel 1, and so on. Pretty simple; I had just gotten confused between the different methods of handling the image stream because there are a few GetBuffer-type calls in the same header file. Hopefully this will help someone else!
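For anyone else landing here, a small sketch of that indexing, assuming a BGRX buffer from IFTImage::GetBuffer() (x and y are the desired pixel coordinates; GetWidth() is assumed to report the image width in pixels):

    // Each pixel occupies 4 bytes, laid out as B, G, R, X.
    BYTE* pixels = colorImage->GetBuffer();
    UINT width = colorImage->GetWidth();
    int i = (y * width + x) * 4;
    BYTE b = pixels[i + 0];
    BYTE g = pixels[i + 1];
    BYTE r = pixels[i + 2];
    // pixels[i + 3] is the unused X byte.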
