Texturing by mask - three.js

I have tiles for a landscape quadtree.
I texture each tile by mask:
vec4 frag = vec4(0.0);
for (int i = 0; i < texture_length; i++)
    frag += texture2D(texture[i], vUv * 6.0) * texture2D(mask[i], vUv);
where texture is an array of textures (grass, asphalt, black soil, etc.), each 256x256 px with repeat = 6,
and mask is an array of masks, each 512x512 px.
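For context, the whole fragment shader looks roughly like this (a sketch only; the texture_length value of 4 and the uniform and varying declarations are assumptions filled in from the snippet above):

#define texture_length 4

uniform sampler2D texture[texture_length]; // grass, asphalt, black soil, ...
uniform sampler2D mask[texture_length];    // one blend mask per detail texture

varying vec2 vUv;

void main() {
    vec4 frag = vec4(0.0);
    for (int i = 0; i < texture_length; i++) {
        // Detail texture tiled 6x across the tile, weighted by its mask
        frag += texture2D(texture[i], vUv * 6.0) * texture2D(mask[i], vUv);
    }
    gl_FragColor = frag;
}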
With one texture and one mask I get 60 fps, but with at least 4 textures blended by mask it drops to 40 fps. In addition, when a tile is added dynamically there is a noticeable millisecond-level freeze.
Is 40 fps justified for this task, or can better performance be achieved?

Related

Fragment shader normal artifacts appearing on specific GPU

I am calculating normals from an RGB encoded height map:
vec3 unpackFactors = vec3(256.0 * 255.0, 255.0, 255.0 / 256.0);
float unpackOffset = -32768.0;
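In other words, a single texel of the displacement map decodes to a height like this (just the decode step factored out as a sketch; the decodeHeight helper is not part of the original shader):

// texture2D() returns each channel in [0, 1]; the factors weight the three
// bytes by 256, 1 and 1/256, so the result spans [-32768, ~32768) after the offset.
float decodeHeight(sampler2D map, vec2 uv) {
    return dot(texture2D(map, uv).rgb, unpackFactors) + unpackOffset;
}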
Therefore I edited the phong shader built-in dHdxy_fwd() function:
vec2 dHdxy_fwd() {
    float texelSize = 1.0 / 256.0;
    vec2 dSTdx = vec2(texelSize, 0.0);
    vec2 dSTdy = vec2(0.0, texelSize);
    // Decoded height at the current texel and its +x / +y neighbours
    float Hll = bumpScale * dot(texture2D(displacementMap, vUv).rgb, unpackFactors) + unpackOffset;
    float dBx = bumpScale * dot(texture2D(displacementMap, vUv + dSTdx).rgb, unpackFactors) + unpackOffset - Hll;
    float dBy = bumpScale * dot(texture2D(displacementMap, vUv + dSTdy).rgb, unpackFactors) + unpackOffset - Hll;
    return vec2(dBx, dBy);
}
The decoding of the height unfortunately causes artifacts around the green channel of the texture, happening on an iPhone 7, an iPhone XS and the dedicated Radeon 455 GPU of my Mac.
On the Intel HD Graphics 530 (integrated GPU), however, there are no such artifacts visible – it looks perfect, just as it should (ignore the tile seams for now).
Why are artifacts appearing on some (most of the tested) GPUs? Any idea how to get rid of them? Seems like some numerical instability, but I fumbled around with texture precision, compressing the total height value, etc. with no luck yet.

Retrieve Vertices Data in THREE.js

I'm creating a mesh with a custom shader. Within the vertex shader I'm modifying the original position of the geometry vertices. Then I need to access these new vertex positions from outside the shader; how can I accomplish this?
In lieu of transform feedback (which WebGL 1.0 does not support), you will have to use a passthrough fragment shader and floating-point texture (this requires loading the extension OES_texture_float). That is the only approach to generate a vertex buffer on the GPU in WebGL. WebGL does not support pixel buffer objects either, so reading the output data back is going to be very inefficient.
Nevertheless, here is how you can accomplish this:
This will be a rough overview focusing on OpenGL rather than anything Three.js specific.
First, encode your vertex array this way (add a 4th component for index):
Vec4 pos_idx : xyz = Vertex Position, w = Vertex Index (0.0 through NumVerts-1.0)
Storing the vertex index as the w component is necessary because OpenGL ES 2.0 (WebGL 1.0) does not support gl_VertexID.
Next, you need a 2D floating-point texture:
MaxTexSize = Query GL_MAX_TEXTURE_SIZE
Width = MaxTexSize;
Height = max (ceil (NumVerts / MaxTexSize), 1); // enough rows to hold every vertex
Create an RGBA floating-point texture with those dimensions and use it as FBO color attachment 0.
Vertex Shader:
#version 100
attribute vec4 pos_idx;
uniform int width;  // Width of the floating-point texture
uniform int height; // Height of the floating-point texture
varying vec4 vtx_out;
void main (void)
{
  float idx = pos_idx.w;
  // Position this vertex so that it occupies a unique pixel.
  // GLSL ES 1.00 has no integer % operator, so use mod() on floats,
  // and offset by half a texel so each point lands on a pixel center.
  vec2 xy_idx = (vec2 (mod (idx, float (width)),
                       floor (idx / float (width))) + 0.5)
                / vec2 (float (width), float (height)) * 2.0 - 1.0;
  gl_Position = vec4 (xy_idx, 0.0, 1.0);
  //
  // Do all of your per-vertex calculations here, and output to vtx_out.xyz
  //
  // Store the index in the W component
  vtx_out.w = idx;
}
Passthrough Fragment Shader:
#version 100
// GLSL ES 1.00 fragment shaders have no default float precision
#ifdef GL_FRAGMENT_PRECISION_HIGH
precision highp float;   // keep full precision for the output positions
#else
precision mediump float;
#endif
varying vec4 vtx_out;
void main (void)
{
  gl_FragData [0] = vtx_out;
}
Draw and Read Back:
// Draw your entire vertex array for processing (as GL_POINTS)
glDrawArrays (GL_POINTS, 0, NumVerts);
// Bind the FBO's color attachment 0 to GL_TEXTURE_2D
// Read the texture back and store its results in an array `verts`
// (WebGL has no glGetTexImage; there you would read the still-bound FBO
//  with readPixels instead)
glGetTexImage (GL_TEXTURE_2D, 0, GL_RGBA, GL_FLOAT, verts);

How to make a fragment shader replace white with alpha, opengl-es

I am trying to come up with an OpenGL ES fragment shader that will replace the white pixels with alpha. The image with the checkered background is what I want; the checkered background represents the image after alpha conversion. Any tips? Normally I'd hate asking this here, but I can't find anything on it.
Getting the "white pixels" as in the image you posted seems to be getting a grayscale component. That is summing up RGB values dividing by 3. Then output RGB are all .0 in your case and the alpha equals to the grayscale pixel...
vec4 textureSample = texture2D(uniformTexture, textureCoordinate);
lowp float grayscaleComponent = textureSample.x*(1.0/3.0) + textureSample.y*(1.0/3.0) + textureSample.z*(1.0/3.0);
gl_FragColor = vec4(0.0, 0.0, 0.0, grayscaleComponent);
Properly speaking, the grayscale value is 0.2126 * R + 0.7152 * G + 0.0722 * B:
http://en.wikipedia.org/wiki/Grayscale
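Using those weights instead of the plain average, a complete fragment shader might look like this (a sketch; the uniform and varying names are carried over from the snippet above and are assumptions):

precision mediump float;

uniform sampler2D uniformTexture;
varying vec2 textureCoordinate;

void main() {
    vec4 textureSample = texture2D(uniformTexture, textureCoordinate);
    // Perceptual (Rec. 709) luma instead of a plain average
    float luma = dot(textureSample.rgb, vec3(0.2126, 0.7152, 0.0722));
    gl_FragColor = vec4(0.0, 0.0, 0.0, luma);
}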

Repeat texture like stipple

I'm using orthographic projection.
I have 2 triangles creating one long quad.
On this quad I put a texture that repeats itself along the way.
The world zoom is constantly changed by the user, which makes the quad shorter or longer accordingly. The height is calculated in the shader so it is always the same size (in pixels).
My problem is that I want the texture to repeat according to its real (pixel) size and the length of the quad. In other words, the texture should always stay the same size (in pixels) and fill the quad by repeating more or fewer times depending on the quad length.
The rotation is important.
For example: I've added texture coordinates to my vertices so that the texture repeats 20 times. When the view is zoomed far out, the texture appears squeezed; when I zoom in, it is stretched. Either way it always repeats exactly 20 times.
I'm sure I have to play with the texture coordinates in the fragment shader, but I don't see the solution. Or perhaps there is a better approach to my problem.
---- ADDITION ----
Solved it by:
Calculating the repeat S value at the current zoom (at which I'm adding the vertices) and sending the map width (in world units) as an attribute. Every draw I'm sending the current map width as a uniform for calculating the scale.
But I'm not happy with this solution.
OK, I found a way to do it with minimal attributes and minimal code in the shader.
Do Once:
Calculate the repeat count for each line; since my world and my screen are 1:1 (1 unit in my world is 1 pixel), that is LineDistance(InWorldUnits) / picWidth(inScreenUnits).
Save it as an attribute.
Every Draw:
Calculate the world-to-screen scale: WorldWidth / ScreenWidth.
Set it as a uniform.
Draw the buffer.
In the fragment shader:
Simply multiply this scale by the repeat value (the attribute, forwarded as a varying).
Works perfectly and looks good. Resizing the window is supported as well.
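A minimal sketch of the fragment-shader side of this approach, assuming the per-line repeat count is forwarded from the vertex shader as a varying and S runs 0..1 along the quad (the names u_texture, u_worldToScreen, v_repeat and v_texcoord are assumptions, not the original code):

precision mediump float;

uniform sampler2D u_texture;
uniform float u_worldToScreen;  // world-to-screen scale, updated every draw (see "Every Draw" above)
varying float v_repeat;         // LineDistance(InWorldUnits) / picWidth(inScreenUnits), per line
varying vec2 v_texcoord;        // S runs 0..1 along the quad

void main() {
    // Repeat along S according to the on-screen length of the line;
    // the texture's REPEAT wrap mode handles values outside 0..1.
    vec2 uv = vec2(v_texcoord.s * v_repeat * u_worldToScreen, v_texcoord.t);
    gl_FragColor = texture2D(u_texture, uv);
}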
The general solution is to include a texture matrix. So your vertex shader might look something like this:
attribute vec4 a_position;
attribute vec2 a_texcoord;
varying vec2 v_texcoord;
uniform mat4 u_matrix;
uniform mat4 u_texMatrix;
void main() {
  gl_Position = u_matrix * a_position;
  v_texcoord = (u_texMatrix * vec4(a_texcoord, 0, 1)).xy;
}
Now you can set up the texture matrix to scale your texture coordinates however you need. If your texture coordinates go from 0 to 1 across the texture and your pattern is 16 pixels wide, then if you're drawing a line 100 pixels long you'd need 100/16 as your X scale.
var pixelsLong = 100;
var pixelsTall = 8;
var textureWidth = 16;
var textureHeight = 16;
var xScale = pixelsLong / textureWidth;
var yScale = pixelsTall / textureHeight;
var texMatrix = [
  xScale, 0, 0, 0,
  0, yScale, 0, 0,
  0, 0, 1, 0,
  0, 0, 0, 1,
];
gl.uniformMatrix4fv(texMatrixLocation, false, texMatrix);
That seems like it would work. Because you're using a matrix, you can also easily offset or rotate the texture. See matrix math.

GLSL: simulating 3D texture with 2D texture

I came up with some code that simulates a 3D texture lookup using a big 2D texture that contains the tiles. The 3D texture is 128x128x64 and the big 2D texture is 1024x1024, divided into 64 tiles of 128x128.
The lookup code in the fragment shader looks like this:
#extension GL_EXT_gpu_shader4 : enable
varying float LightIntensity;
varying vec3 pos;
uniform sampler2D noisef;
vec4 flat_texture3D()
{
    vec3 p = pos;
    vec2 inimg = p.xy;
    // Select the depth layer and its offset in the 8x8 tile grid
    int d = int(p.z * 128.0);
    float ix = float(d % 8);
    float iy = float(d / 8);
    vec2 oc = inimg + vec2(ix, iy);
    oc *= 0.125;
    return texture2D(noisef, oc);
}
void main (void)
{
    vec4 noisevec = flat_texture3D();
    gl_FragColor = noisevec;
}
The tiling logic seems to work OK, but there is one problem with this code:
There are strange 1 to 2 pixel wide streaks between the layers of voxels.
The streaks appear just at the border when d changes.
I've been working on this for 2 days now and still have no idea what's going on here.
This looks like a texture filtering issue. Think about it: when you come close to the border, the bilinear filter will consider the neighboring texel – in your case, one from another "depth layer".
To avoid this, you can clamp the texture coordinates so that they never leave the rect defined by the outermost texel centers of the tile (similar to GL_CLAMP_TO_EDGE, but on a per-tile basis). But you should be aware that the problem will get worse when using mipmapping. You should also be aware that currently you cannot filter in the z direction, as a real 3D texture would; you could simulate this manually in the shader, of course.
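For instance, a rough sketch built on the flat_texture3D() code above (the helper names, the half-texel clamp margin and the depth clamping are assumptions, not tested code):

// Sample one 128x128 tile, clamping to its outermost texel centers so
// the bilinear filter never reads from a neighboring tile.
vec4 sample_tile(vec2 inimg, float d)
{
    float ix = mod(d, 8.0);
    float iy = floor(d / 8.0);
    float halfTexel = 0.5 / 128.0;
    vec2 uv = clamp(inimg, halfTexel, 1.0 - halfTexel);
    return texture2D(noisef, (uv + vec2(ix, iy)) * 0.125);
}

vec4 flat_texture3D_filtered()
{
    vec3 p = pos;
    float z = p.z * 128.0;                    // same depth scaling as above
    float d0 = clamp(floor(z), 0.0, 63.0);
    float d1 = min(d0 + 1.0, 63.0);           // don't step past the last tile
    // Manually filter between the two nearest depth layers
    return mix(sample_tile(p.xy, d0), sample_tile(p.xy, d1), fract(z));
}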
But really: why not just use 3D textures? The hardware can do all this for you, with much less overhead...
