Prevent DataTexture value normalization in THREE - three.js

In THREE you can specify a DataTexture with a given data type and format. My shader pipeline normalizes the raw values based on a few user-controlled uniforms.
In the case of a Float32Array, it is very simple:
data = new Float32Array(...)
texture = new THREE.DataTexture(data, columns, rows, THREE.LuminanceFormat, THREE.FloatType)
And, in the shader, the swizzled values arrive non-normalized, as expected. However, if I use:
data = new Uint8Array(...)
texture = new THREE.DataTexture(data, columns, rows, THREE.LuminanceFormat, THREE.UnsignedByteType);
then the texture values are normalized to the range 0.0-1.0 by the time they reach the pipeline, which is not what I was expecting. Is there a way to prevent this behavior?
Here is an example jsfiddle demonstrating a quick test of what is unexpected (at least for me): http://jsfiddle.net/VsWb9/3796/
three.js r.71

For future reference, this is not currently possible in WebGL. It would require the GL_RED_INTEGER format and the usampler2D sampler type, neither of which is supported. This comment on the internalformat of a texture also describes the issue for the GL internal formats:
For that matter, the format of GL_LUMINANCE says that you're passing either floating-point data or normalized integer data (the type says that it's normalized integer data). Of course, since there's no GL_LUMINANCE_INTEGER (which is how you say that you're passing integer data, to be used with integer internal formats), you can't really use luminance data like this.
Use GL_RED_INTEGER for the format and GL_R8UI for the internal format if you really want 8-bit unsigned integers in your texture. Note that integer texture support requires OpenGL 3.x-class hardware.
That being said, you cannot use sampler2D with an integer texture. If you are using a texture that uses an unsigned integer texture format, you must use usampler2D.
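For reference outside WebGL, a minimal sketch of the quoted advice in desktop OpenGL 3.x (assuming a 3.x context and function loader are already set up, and reusing the data, columns, and rows names from the question):
// Upload the 8-bit data as a true unsigned-integer texture (no normalization).
GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
// GL_R8UI internal format + GL_RED_INTEGER external format = integer texture.
glTexImage2D(GL_TEXTURE_2D, 0, GL_R8UI, columns, rows, 0,
             GL_RED_INTEGER, GL_UNSIGNED_BYTE, data);
// Integer textures cannot be filtered; nearest sampling only.
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
In the matching GLSL 3.30 shader the sampler is then declared as usampler2D, and texture() returns raw uint values (0-255) rather than normalized floats.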

Related

Can a hyperspectral image be stored in cv::mat?

I know the cv::Mat container can store 3-channel images, but its data pointer can also hold multidimensional matrices. I was wondering whether the different bands can be stored in a multidimensional matrix while keeping a color channel for each band (even though they'd be false colors beyond the visible range).
OpenCV Mat objects can be N-dimensional. As the docs for cv::Mat show, there are multiple constructors that specify the dimensions.
Furthermore, 2-D matrices can have many more than three channels. The channels are encoded in the "type" of the matrix, so there is a macro to create a type with any number of channels (up to CV_CN_MAX = 512) for each of the standard matrix datatypes, e.g. uint8 (CV_8UC(n)) and fp64 (CV_64FC(n)). I believe these macros exist for every datatype, but you can check the same doc pages, where all the macros are defined at the top.
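As a minimal sketch (the band count and image size here are made-up numbers), a ten-band cube can be held either as a 2-D matrix with ten channels or as a genuine 3-D Mat:
#include <opencv2/core.hpp>

int main() {
    // 512x512 image with 10 spectral bands, stored as a 2-D, 10-channel matrix (8-bit samples).
    cv::Mat bands2d(512, 512, CV_8UC(10));

    // The same data as a true 3-D matrix: rows x cols x bands.
    int dims[3] = {512, 512, 10};
    cv::Mat cube(3, dims, CV_8U);

    // Read band 7 of the pixel at row 10, column 20 from the multi-channel form.
    unsigned char v = bands2d.ptr<unsigned char>(10)[20 * bands2d.channels() + 7];
    (void)v;
    return 0;
}
The multi-channel form keeps the familiar per-pixel access pattern, while the 3-D form is what the N-dimensional cv::Mat constructors are intended for.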

Rendering to a texture with blending and large component sizes and OpenGLES3.1

I'm trying to perform a render-to-texture operation that is supposed to accumulate arithmetic calculations in a texture. The output texture format should have at least the following capabilities:
Must have at least 2 components: One for the calculation result, one for alpha.
The calculation result has a value range of 0-65536.
Must be able to perform additive blending on these values using at least the alpha value from the fragment shader (blend function will be GL_ONE, GL_SRC_ALPHA).
Usually, I render to a texture using FBOs. However, according to
https://www.khronos.org/registry/OpenGL-Refpages/es3.1/html/glTexImage2D.xhtml
texture formats are either not color-renderable (so no FBOs), not blendable (because they are integer formats), or have small component sizes (usually 8 bits).
Is there a texture format that suits my needs? Or is there a non-FBO solution?
Regards

How to pass a variable number of MTLTextures to a fragment shader?

What is the correct syntax for passing a variable number of MTLTextures as an array to a fragment shader?
This StackOverflow Question: "How to use texture2d_array array in metal?" mentions the use of:
array<texture2d<half>, 5>
However, this requires specifying the size of the array. The Metal Shading Language Specification (Sec. 2.11) also demonstrates this type, but it additionally refers to array_ref, and it's not clear to me how to use it, or whether it's even allowed as a parameter type for a fragment shader, given this statement:
"The array_ref type cannot be passed as an argument to graphics and kernel functions."
What I'm currently doing is just declaring the parameter as:
fragment float4 fragmentShader(RasterizerData in [[ stage_in ]],
                               sampler s [[ sampler(0) ]],
                               const array<texture2d<half>, 128> textures [[ texture(0) ]]) {
    const half4 c = textures[in.textureIndex].sample(s, in.coords);
    return float4(c);
}
I use 128 since that is the limit on fragment textures. In any render pass I might use between 1 and n textures, where I ensure that n never exceeds 128. That seems to work for me, but am I Doing It Wrong?
My use-case is drawing a 2D plane that is sub-divided into a bunch of tiles. Each tile's content is sampled from a designated texture in the array based on a pre-computed texture index. The textures are set using setFragmentTexture:atIndex in the correct order at the start of the render pass. The texture index is passed from the vertex shader to the fragment shader.
You should consider an array texture instead of a texture array. That is, a texture whose type is MTLTextureType2DArray. You use the arrayLength property of the texture descriptor to specify how many 2-D slices the array texture contains.
To populate the texture, you specify which slice you're writing to with methods such as -replaceRegion:... or -copyFrom{Buffer,Texture}:...toTexture:....
In a shader, you can specify which element to sample or read from using the array parameter.

Floating Point Textures in OpenGL ES 2.0

I've been trying to figure out how to use float textures in GLES2. The API reference says that only unsigned bytes and shorts can be used, but I've seen people elsewhere saying float textures are supported.
I could use GL_LUMINANCE as the texture format but that only gets me one float value.
In OpenGL ES 2.0, floating-point textures are only supported if the implementation exports the OES_texture_float extension. Note that this extension only allows nearest filtering within a texture level, and no filtering between texture levels. This restriction is loosened by the presence of OES_texture_float_linear. Another potential caveat is that the presence of OES_texture_float does not require that the implementation support rendering to floating-point textures with framebuffer objects.
What are you trying to do with float textures?
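If the extension is exported, uploading a float texture is an ordinary glTexImage2D call with GL_FLOAT as the type. A minimal sketch, assuming a working GLES2 context (the function and variable names here are illustrative):
#include <GLES2/gl2.h>
#include <string.h>

// True if the implementation's extension string lists the given extension.
static bool hasExtension(const char* name) {
    const char* ext = (const char*)glGetString(GL_EXTENSIONS);
    return ext != 0 && strstr(ext, name) != 0;
}

GLuint createFloatTexture(const float* pixels, int width, int height) {
    if (!hasExtension("GL_OES_texture_float"))
        return 0; // float textures not available on this implementation

    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    // Without GL_OES_texture_float_linear, only NEAREST filtering is legal.
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    // Four raw floats per texel; the values come back unmodified when sampled.
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_FLOAT, pixels);
    return tex;
}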

how does the d3dx library save textures to files?

When using the function D3DXSaveTextureToFile and passing in D3DXIFF_BMP to create a BMP, I've noticed that the values seem to be approximated rather than written out exactly.
Correct me if I'm wrong, but a floating-point texture can store any float in any given texel, which puts it outside the range of a BMP, which is limited to rgb(255,255,255,255). So it seems the function is simply taking the uppermost and lowermost values in the texture and normalizing everything between that range.
So my question is: is it possible to grab the values exactly as they are in memory, including when the colours are outside the spectrum of the computer monitor?
Don't use BMP. Use a format that supports the data type you want. For DX textures, it seems the D3DXIFF_PFM format is what you need. It's described like so:
Portable float map format. A raw floating point image format, without any compression. The file header specifies image width, height, monochrome or color, and machine word order. Pixel data is stored as 32-bit floating point values, with 3 values per pixel for color, and one value per pixel for monochrome.
Note that images will be large, though. A 256x256 texture in this format should weigh in at around 768 KB.
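The call itself is the same as for BMP; only the format constant changes. A minimal sketch, assuming a D3D9 texture and a hypothetical output path:
#include <d3dx9.h>

// Should write the raw 32-bit float texels to a .pfm file, with no
// normalization into a 0-255 range.
HRESULT SaveAsPfm(LPDIRECT3DTEXTURE9 pTexture)
{
    return D3DXSaveTextureToFileA("output.pfm", D3DXIFF_PFM, pTexture, NULL);
}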
Update: you should be able to use ImageMagick's display command to view images in this format. HDRView also supports the PFM format. A third choice might be fv.
