Aspect Fit and Aspect Fill content mode with OpenGL ES 2.0 - opengl-es

I need to add two new content modes to display my textures with OpenGL ES 2.0: "Aspect Fit" and "Aspect Fill".
Here's an image explaining the different content modes:
I already have the "Scale to fill" content mode, which I believe is the default behavior.
Here's my basic vertex shader code for textures:
precision highp float;
attribute vec2 aTexCoordinate;
varying vec2 vTexCoordinate;
uniform mat4 uMVPMatrix;
attribute vec4 vPosition;
void main() {
vTexCoordinate = aTexCoordinate;
gl_Position = uMVPMatrix * vPosition;
}
And here's my fragment shader for textures:
precision mediump float;
uniform vec4 vColor;
uniform sampler2D uTexture;
varying vec2 vTexCoordinate;
void main() {
// premultiplied alpha
vec4 texColor = texture2D(uTexture, vTexCoordinate);
// Scale the texture RGB by the vertex color
texColor.rgb *= vColor.rgb;
// Scale the texture RGBA by the vertex alpha to reinstate premultiplication
gl_FragColor = texColor * vColor.a;
//gl_FragColor = vec4(1.0, 1.0, 1.0, 1.0);
}
For the "Aspect Fill" mode, I thought I could play with the texture coordinates to crop the image. But for the "Aspect Fit" mode, I don't have a clear idea on how I could do it in the shaders, and even better with a red background color like in the screenshot.

You have to scale the texture coordinates. I recommend adding a uniform for scaling the texture:
uniform vec2 textureScale;
void main()
{
vTexCoordinate = textureScale * (aTexCoordinate - 0.5) + 0.5;
// [...]
}
Compute the aspect ratio of the texture:
aspect = textureWidth / textureHeight
Then textureScale has to be set depending on the mode:
"Scale To Fill": vec2(1.0, 1.0).
landscape "Aspect Fit": vec2(1.0, aspect).
landscape "Aspect Fill": vec2(1.0/aspect, 1.0).
portrait "Aspect Fit": vec2(1.0/aspect, 1.0).
portrait "Aspect Fill": vec2(1.0, aspect).
You have to set the texture wrap parameters (GL_TEXTURE_WRAP_S, GL_TEXTURE_WRAP_T) mode to GL_CLAMP_TO_EDGE. See glTexParameter.
Alternatively you can discard superfluous fragments in the fragment shader:
void main() {
if (vTexCoordinate.x < 0.0 || vTexCoordinate.x > 1.0 ||
vTexCoordinate.y < 0.0 || vTexCoordinate.y > 1.0) {
discard;
}
// [...]
}
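And if, as in the question, you want a solid background color (the red area in the screenshot) to show through in the "Aspect Fit" mode, you can output that color instead of discarding. A small sketch based on the question's fragment shader, with a hypothetical uBackgroundColor uniform:
precision mediump float;
uniform vec4 vColor;
uniform sampler2D uTexture;
uniform vec4 uBackgroundColor; // hypothetical, e.g. vec4(1.0, 0.0, 0.0, 1.0) for red
varying vec2 vTexCoordinate;
void main() {
    if (vTexCoordinate.x < 0.0 || vTexCoordinate.x > 1.0 ||
        vTexCoordinate.y < 0.0 || vTexCoordinate.y > 1.0) {
        gl_FragColor = uBackgroundColor;
        return;
    }
    // premultiplied alpha, as in the question's shader
    vec4 texColor = texture2D(uTexture, vTexCoordinate);
    texColor.rgb *= vColor.rgb;
    gl_FragColor = texColor * vColor.a;
}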

Thanks to @Rabbid76 and his answer here, I managed to do it after adding this part to his answer:
float textureAspect = (float)textureSize.width / (float)textureSize.height;
float frameAspect = (float)frameSize.width / (float)frameSize.height;
float scaleX = 1, scaleY = 1;
float textureFrameRatio = textureAspect / frameAspect;
BOOL portraitTexture = textureAspect < 1;
BOOL portraitFrame = frameAspect < 1;
if(contentMode == AspectFill) {
if(portraitFrame)
scaleX = 1.f / textureFrameRatio;
else
scaleY = textureFrameRatio;
}
else if(contentMode == AspectFit) {
if(portraitFrame)
scaleY = textureFrameRatio;
else
scaleX = 1.f / textureFrameRatio;
}
Then I pass vec2(scaleX, scaleY) as the textureScale uniform.

A Metal fragment shader for making a texture aspect fill/fit within an expected size:
fragment float4 fragment_aspect_fitfill(
VertexOut vertexIn [[stage_in]],
texture2d<float, access::sample> sourceTexture [[texture(0)]],
sampler sourceSampler [[sampler(0)]],
constant float2 &size [[ buffer(0) ]],
constant float &contentMode [[ buffer(1) ]])
{
float2 uv = vertexIn.textureCoordinate;
// Calculate the aspect ratio for both the texture and the expected output texture
float textureAspect = (float)sourceTexture.get_width() / (float)sourceTexture.get_height();
float frameAspect = (float)size.x / (float)size.y;
float scaleX = 1, scaleY = 1;
float textureFrameRatio = textureAspect / frameAspect;
bool portraitTexture = textureAspect < 1;
bool portraitFrame = frameAspect < 1;
// Content mode 0 is for aspect Fill, 1 is for Aspect Fit
if(contentMode == 0.0) {
if(portraitFrame)
scaleX = 1.f / textureFrameRatio;
else
scaleY = textureFrameRatio;
} else if(contentMode == 1.0) {
if(portraitFrame)
scaleY = textureFrameRatio;
else
scaleX = 1.f / textureFrameRatio;
}
float2 textureScale = float2(scaleX, scaleY);
float2 vTexCoordinate = textureScale * (uv - 0.5) + 0.5;
return sourceTexture.sample(sourceSampler, vTexCoordinate);
}
** Note: see MetalPetal to better understand structs like VertexOut, etc.
** This fragment shader may not be perfect, but it works and has worked for me.

Related

How can I implement the Thin Plate Spline algorithm in a GLSL vertex shader?

I would like to make a GLSL shader for warping an image/texture using the TPS algorithm. How would I write the GLSL vertex shader for that?
The answer was that I needed a vertex shader, not a fragment shader.
What I needed was to actually warp an image with GLSL, and this article describes how to do it: https://testdrive-archive.azurewebsites.net/Graphics/Warp/Default.html. Note that because the warp is applied per vertex, the quad has to be subdivided into a grid of vertices for the deformation to be visible.
attribute vec2 aPosition;
varying vec2 vTexCoord;
#define MAXPOINTS 9
uniform vec2 p1[MAXPOINTS]; // the reference (source) points
uniform vec2 p2[MAXPOINTS]; // the warp (destination) points
void main() {
vTexCoord = aPosition;
vec2 position = aPosition * 2.0 - 1.0; // convert 0 - 1 range to -1 to +1 range
for (int i = 0; i < MAXPOINTS; i++)
{
float dragdistance = distance(p1[i], p2[i]);
float mydistance = distance(p1[i], position);
if (mydistance < dragdistance)
{
vec2 maxdistort = (p2[i] - p1[i]) / 4.0;
float normalizeddistance = mydistance / dragdistance;
float normalizedimpact = (cos(normalizeddistance*3.14159265359)+1.0)/2.0;
position += (maxdistort * normalizedimpact);
}
}
//gl_Position = vec4(aPosition * 2.0 - 1.0, 0.0, 1.0);
gl_Position = vec4(position, 0.0, 1.0);
}

Drawing a circle in fragment shader

I am a complete noob when it comes to creating shaders. Or, better said, I just learned about them yesterday.
I am trying to create a really simple circle. I thought I had finally figured it out, but it turns out to be too large. It should match the size of the DisplayObject the filter is applied to.
The fragment shader:
precision mediump float;
varying vec2 vTextureCoord;
vec2 resolution = vec2(1.0, 1.0);
void main() {
vec2 uv = vTextureCoord.xy / resolution.xy;
uv -= 0.5;
uv.x *= resolution.x / resolution.y;
float r = 0.5;
float d = length(uv);
float c = smoothstep(d,d+0.003,r);
gl_FragColor = vec4(vec3(c,0.5,0.0),1.0);
}
Example using Pixi.js:
var app = new PIXI.Application();
document.body.appendChild(app.view);
var background = PIXI.Sprite.fromImage("required/assets/bkg-grass.jpg");
background.width = 200;
background.height = 200;
app.stage.addChild(background);
var vertexShader = `
attribute vec2 aVertexPosition;
attribute vec2 aTextureCoord;
uniform mat3 projectionMatrix;
varying vec2 vTextureCoord;
void main(void)
{
gl_Position = vec4((projectionMatrix * vec3(aVertexPosition, 1.0)).xy, 0.0, 1.0);
vTextureCoord = aTextureCoord;
}
`;
var fragShader = `
precision mediump float;
varying vec2 vTextureCoord;
vec2 resolution = vec2(1.0, 1.0);
void main() {
vec2 uv = vTextureCoord.xy / resolution.xy;
uv -= 0.5;
uv.x *= resolution.x / resolution.y;
float r = 0.5;
float d = length(uv);
float c = smoothstep(d,d+0.003,r);
gl_FragColor = vec4(vec3(c,0.5,0.),1.0);
}
`;
var filter = new PIXI.Filter(vertexShader, fragShader);
filter.padding = 0;
background.filters = [filter];
body { margin: 0; }
<script src="https://cdnjs.cloudflare.com/ajax/libs/pixi.js/4.5.2/pixi.js"></script>
Pixi.js's vTextureCoord does not go from 0 to 1.
From the docs
V4 filters differ from V3. You can't just add in the shader and assume that texture coordinates are in the [0,1] range.
...
Note: vTextureCoord multiplied by filterArea.xy is the real size of bounding box.
If you want to get the pixel coordinates, use uniform filterArea, it will be passed to the filter automatically.
uniform vec4 filterArea;
...
vec2 pixelCoord = vTextureCoord * filterArea.xy;
They are in pixels. That won't work if we want something like "fill the ellipse into a bounding box". So, let's pass the dimensions too! PIXI doesn't do it automatically; we need a manual fix:
filter.apply = function(filterManager, input, output)
{
this.uniforms.dimensions[0] = input.sourceFrame.width
this.uniforms.dimensions[1] = input.sourceFrame.height
// draw the filter...
filterManager.applyFilter(this, input, output);
}
Let's combine it in the shader:
uniform vec4 filterArea;
uniform vec2 dimensions;
...
vec2 pixelCoord = vTextureCoord * filterArea.xy;
vec2 normalizedCoord = pixelCoord / dimensions;
Here's your snippet updated.
var app = new PIXI.Application();
document.body.appendChild(app.view);
var background = PIXI.Sprite.fromImage("required/assets/bkg-grass.jpg");
background.width = 200;
background.height = 200;
app.stage.addChild(background);
var vertexShader = `
attribute vec2 aVertexPosition;
attribute vec2 aTextureCoord;
uniform mat3 projectionMatrix;
varying vec2 vTextureCoord;
void main(void)
{
gl_Position = vec4((projectionMatrix * vec3(aVertexPosition, 1.0)).xy, 0.0, 1.0);
vTextureCoord = aTextureCoord;
}
`;
var fragShader = `
precision mediump float;
varying vec2 vTextureCoord;
uniform vec2 dimensions;
uniform vec4 filterArea;
void main() {
vec2 pixelCoord = vTextureCoord * filterArea.xy;
vec2 uv = pixelCoord / dimensions;
uv -= 0.5;
float r = 0.5;
float d = length(uv);
float c = smoothstep(d,d+0.003,r);
gl_FragColor = vec4(vec3(c,0.5,0.),1.0);
}
`;
var filter = new PIXI.Filter(vertexShader, fragShader);
filter.apply = function(filterManager, input, output)
{
this.uniforms.dimensions[0] = input.sourceFrame.width
this.uniforms.dimensions[1] = input.sourceFrame.height
// draw the filter...
filterManager.applyFilter(this, input, output);
}
filter.padding = 0;
background.filters = [filter];
body { margin: 0; }
<script src="https://cdnjs.cloudflare.com/ajax/libs/pixi.js/4.5.2/pixi.js"></script>
It seems you've stumbled upon weird floating point precision problems: the texture coordinates (vTextureCoord) in your fragment shader aren't strictly in the (0, 1) range. Here's what I got when I added the line gl_FragColor = vec4(vTextureCoord, 0, 1):
It looks good, but if we inspect it closely, the lower right pixel should be (1, 1, 0), and it isn't:
The problem goes away if, instead of setting the size to 500 by 500, we use a power-of-two size (say, 512 by 512):
The other possible way to mitigate the problem would be to circumvent Pixi's code that computes the projection matrix and provide your own that transforms a smaller quad into the desired screen position.

Weird behavior if DataTextures are not square (1:1)

I have a pair of shader programs where everything works great if my DataTextures are square (1:1), but if one or both have a 2:1 (width:height) ratio the behavior gets messed up. I can pad each of the buffers with unused filler to make sure they are always square, but this seems unnecessarily costly (memory-wise) in the long run, as one of the two buffers is quite large to start with. Is there a way to handle a 2:1 buffer in this scenario?
I have a pair of shader programs:
The first is a single frag shader used to calculate the physics for my program (it writes out a texture tPositions to be read by the second set of shaders). It is driven by Three.js's GPUComputeRenderer script (resolution set at the size of my largest buffer.)
The second pair of shaders (vert and frag) use the data texture tPositions produced by the first shader program to then render out the visualization (resolution set at the window size).
The visualization is a grid of variously shaped particle clouds. In the shader programs, there are textures of two different sizes: The smaller sized textures contain information for each of the particle clouds (one texel per cloud), larger sized textures contain information for each particle in all of the clouds (one texel per particle). Both have a certain amount of unused filler tacked on the end to fill them out to a power of 2.
Texel-per-particle sized textures (large): tPositions, tOffsets
Texel-per-cloud sized textures (small): tGridPositionsAndSeeds, tSelectionFactors
As I said before, the problem is that when these two buffer sizes (the large and the small) are at a 1:1 (width: height) ratio, the programs work just fine; however, when one or both are at a 2:1 (width:height) ratio the behavior is a mess. What accounts for this, and how can I address it? Thanks in advance!
UPDATE: Could the problem be related to the fact that I store the texel coords used to read the tPositions texture in the position attribute of the second shader program? If so, perhaps this GitHub issue regarding texel coords in the position attribute may be related, though I can't find a corresponding question/answer here on SO.
UPDATE 2:
I'm also looking into whether this could be an unpack alignment issue. Thoughts?
Here's the set up in Three.js for the first shader program:
function initComputeRenderer() {
textureData = MotifGrid.getBufferData();
gpuCompute = new GPUComputationRenderer( textureData.uPerParticleBufferWidth, textureData.uPerParticleBufferHeight, renderer );
dtPositions = gpuCompute.createTexture();
dtPositions.image.data = textureData.tPositions;
offsetsTexture = new THREE.DataTexture( textureData.tOffsets, textureData.uPerParticleBufferWidth, textureData.uPerParticleBufferHeight, THREE.RGBAFormat, THREE.FloatType );
offsetsTexture.needsUpdate = true;
gridPositionsAndSeedsTexture = new THREE.DataTexture( textureData.tGridPositionsAndSeeds, textureData.uPerMotifBufferWidth, textureData.uPerMotifBufferHeight, THREE.RGBAFormat, THREE.FloatType );
gridPositionsAndSeedsTexture.needsUpdate = true;
selectionFactorsTexture = new THREE.DataTexture( textureData.tSelectionFactors, textureData.uPerMotifBufferWidth, textureData.uPerMotifBufferHeight, THREE.RGBAFormat, THREE.FloatType );
selectionFactorsTexture.needsUpdate = true;
positionVariable = gpuCompute.addVariable( "tPositions", document.getElementById( 'position_fragment_shader' ).textContent, dtPositions );
positionVariable.wrapS = THREE.RepeatWrapping; // repeat wrapping works only with power-of-two sizes: 8x8, 16x16, etc.
positionVariable.wrapT = THREE.RepeatWrapping;
gpuCompute.setVariableDependencies( positionVariable, [ positionVariable ] );
positionUniforms = positionVariable.material.uniforms;
positionUniforms.tOffsets = { type: "t", value: offsetsTexture };
positionUniforms.tGridPositionsAndSeeds = { type: "t", value: gridPositionsAndSeedsTexture };
positionUniforms.tSelectionFactors = { type: "t", value: selectionFactorsTexture };
positionUniforms.uPerMotifBufferWidth = { type : "f", value : textureData.uPerMotifBufferWidth };
positionUniforms.uPerMotifBufferHeight = { type : "f", value : textureData.uPerMotifBufferHeight };
positionUniforms.uTime = { type: "f", value: 0.0 };
positionUniforms.uXOffW = { type: "f", value: 0.5 };
}
Here is the first shader program (only a frag for physics calculations):
// tPositions is handled by the GPUCompute script
uniform sampler2D tOffsets;
uniform sampler2D tGridPositionsAndSeeds;
uniform sampler2D tSelectionFactors;
uniform float uPerMotifBufferWidth;
uniform float uPerMotifBufferHeight;
uniform float uTime;
uniform float uXOffW;
[...skipping a noise function for brevity...]
void main() {
vec2 uv = gl_FragCoord.xy / resolution.xy;
vec4 offsets = texture2D( tOffsets, uv ).xyzw;
float alphaMass = offsets.z;
float cellIndex = offsets.w;
if (cellIndex >= 0.0) {
float damping = 0.98;
float texelSizeX = 1.0 / uPerMotifBufferWidth;
float texelSizeY = 1.0 / uPerMotifBufferHeight;
vec2 perMotifUV = vec2( mod(cellIndex, uPerMotifBufferWidth)*texelSizeX, floor(cellIndex / uPerMotifBufferHeight)*texelSizeY );
perMotifUV += vec2(0.5*texelSizeX, 0.5*texelSizeY);
vec4 selectionFactors = texture2D( tSelectionFactors, perMotifUV ).xyzw;
float swapState = selectionFactors.x;
vec4 gridPosition = texture2D( tGridPositionsAndSeeds, perMotifUV ).xyzw;
vec2 noiseSeed = gridPosition.zw;
vec4 nowPos;
vec2 velocity;
nowPos = texture2D( tPositions, uv ).xyzw;
velocity = vec2(nowPos.z, nowPos.w);
if ( swapState == 0.0 ) {
nowPos = texture2D( tPositions, uv ).xyzw;
velocity = vec2(nowPos.z, nowPos.w);
} else { // if swapState == 1
//nowPos = vec4( -(uTime) + gridPosition.x + offsets.x, gridPosition.y + offsets.y, 0.0, 0.0 );
nowPos = vec4( -(uTime) + offsets.x, offsets.y, 0.0, 0.0 );
velocity = vec2(0.0, 0.0);
}
[...skipping the physics for brevity...]
vec2 newPosition = vec2(nowPos.x - velocity.x, nowPos.y - velocity.y);
// Write new position out
gl_FragColor = vec4(newPosition.x, newPosition.y, velocity.x, velocity.y);
}
Here is the setup for the second shader program:
Note: The renderer for this section is a WebGLRenderer at window size
function makePerParticleReferencePositions() {
var positions = new Float32Array( perParticleBufferSize * 3 );
var texelSizeX = 1 / perParticleBufferDimensions.width;
var texelSizeY = 1 / perParticleBufferDimensions.height;
for ( var j = 0, j3 = 0; j < perParticleBufferSize; j ++, j3 += 3 ) {
positions[ j3 + 0 ] = ( ( j % perParticleBufferDimensions.width ) / perParticleBufferDimensions.width ) + ( 0.5 * texelSizeX );
positions[ j3 + 1 ] = ( Math.floor( j / perParticleBufferDimensions.height ) / perParticleBufferDimensions.height ) + ( 0.5 * texelSizeY );
positions[ j3 + 2 ] = j * 0.0001; // this is the real z value for the particle display
}
return positions;
}
var positions = makePerParticleReferencePositions();
...
// Add attributes to the BufferGeometry:
gridOfMotifs.geometry.addAttribute( 'position', new THREE.BufferAttribute( positions, 3 ) );
gridOfMotifs.geometry.addAttribute( 'aTextureIndex', new THREE.BufferAttribute( motifGridAttributes.aTextureIndex, 1 ) );
gridOfMotifs.geometry.addAttribute( 'aAlpha', new THREE.BufferAttribute( motifGridAttributes.aAlpha, 1 ) );
gridOfMotifs.geometry.addAttribute( 'aCellIndex', new THREE.BufferAttribute(
motifGridAttributes.aCellIndex, 1 ) );
uniformValues = {};
uniformValues.tSelectionFactors = motifGridAttributes.tSelectionFactors;
uniformValues.uPerMotifBufferWidth = motifGridAttributes.uPerMotifBufferWidth;
uniformValues.uPerMotifBufferHeight = motifGridAttributes.uPerMotifBufferHeight;
gridOfMotifs.geometry.computeBoundingSphere();
...
function makeCustomUniforms( uniformValues ) {
selectionFactorsTexture = new THREE.DataTexture( uniformValues.tSelectionFactors, uniformValues.uPerMotifBufferWidth, uniformValues.uPerMotifBufferHeight, THREE.RGBAFormat, THREE.FloatType );
selectionFactorsTexture.needsUpdate = true;
var customUniforms = {
tPositions : { type : "t", value : null },
tSelectionFactors : { type : "t", value : selectionFactorsTexture },
uPerMotifBufferWidth : { type : "f", value : uniformValues.uPerMotifBufferWidth },
uPerMotifBufferHeight : { type : "f", value : uniformValues.uPerMotifBufferHeight },
uTextureSheet : { type : "t", value : texture }, // this is a sprite sheet of all 10 strokes
uPointSize : { type : "f", value : 18.0 }, // the radius of a point in WebGL units, e.g. 30.0
// Coords for the hatch textures:
uTextureCoordSizeX : { type : "f", value : 1.0 / numTexturesInSheet },
uTextureCoordSizeY : { type : "f", value : 1.0 }, // the size of a texture in the texture map ( they're square, thus only one value )
};
return customUniforms;
}
And here is the corresponding shader program (vert & frag):
Vertex shader:
uniform sampler2D tPositions;
uniform sampler2D tSelectionFactors;
uniform float uPerMotifBufferWidth;
uniform float uPerMotifBufferHeight;
uniform sampler2D uTextureSheet;
uniform float uPointSize; // the radius size of the point in WebGL units, e.g. "30.0"
uniform float uTextureCoordSizeX; // horizontal dimension of each texture given the full side = 1
uniform float uTextureCoordSizeY; // vertical dimension of each texture given the full side = 1
attribute float aTextureIndex;
attribute float aAlpha;
attribute float aCellIndex;
varying float vCellIndex;
varying vec2 vTextureCoords;
varying vec2 vTextureSize;
varying float vAlpha;
varying vec3 vColor;
varying float vDensity;
[...skipping noise function for brevity...]
void main() {
vec4 tmpPos = texture2D( tPositions, position.xy );
vec2 pos = tmpPos.xy;
vec2 vel = tmpPos.zw;
vCellIndex = aCellIndex;
if (aCellIndex >= 0.0) { // buffer filler cell indexes are -1
float texelSizeX = 1.0 / uPerMotifBufferWidth;
float texelSizeY = 1.0 / uPerMotifBufferHeight;
vec2 perMotifUV = vec2( mod(aCellIndex, uPerMotifBufferWidth)*texelSizeX, floor(aCellIndex / uPerMotifBufferHeight)*texelSizeY );
perMotifUV += vec2(0.5*texelSizeX, 0.5*texelSizeY);
vec4 selectionFactors = texture2D( tSelectionFactors, perMotifUV ).xyzw;
float aSelectedMotif = selectionFactors.x;
float aColor = selectionFactors.y;
float fadeFactor = selectionFactors.z;
vTextureCoords = vec2( aTextureIndex * uTextureCoordSizeX, 0 );
vTextureSize = vec2( uTextureCoordSizeX, uTextureCoordSizeY );
vAlpha = aAlpha * fadeFactor;
vDensity = vel.x + vel.y;
vAlpha *= abs( vDensity * 3.0 );
vColor = vec3( 1.0, aColor, 1.0 ); // set RGB color associated to vertex; use later in fragment shader.
gl_PointSize = uPointSize;
} else { // if this is a filler cell index (-1)
vAlpha = 0.0;
vDensity = 0.0;
vColor = vec3(0.0, 0.0, 0.0);
gl_PointSize = 0.0;
}
gl_Position = projectionMatrix * modelViewMatrix * vec4( pos.x, pos.y, position.z, 1.0 ); // position holds the real z value. The z value of "color" is a component of velocity
}
Fragment shader:
uniform sampler2D tPositions;
uniform sampler2D uTextureSheet;
varying float vCellIndex;
varying vec2 vTextureCoords;
varying vec2 vTextureSize;
varying float vAlpha;
varying vec3 vColor;
varying float vDensity;
void main() {
gl_FragColor = vec4( vColor, vAlpha );
if (vCellIndex >= 0.0) { // only render out the texture if this point is not a buffer filler
vec2 realTexCoord = vTextureCoords + ( gl_PointCoord * vTextureSize );
gl_FragColor = gl_FragColor * texture2D( uTextureSheet, realTexCoord );
}
}
Expected Behavior: I can achieve this by forcing all the DataTextures to be 1:1
Weird Behavior: When the smaller DataTextures are 2:1, the perfectly horizontal clouds in the top right of the picture below appear and their physics is messed up. When the larger DataTextures are 2:1, the grid is skewed and the clouds appear to be missing parts (as seen below). When both the small and large textures are 2:1, both odd behaviors happen (this is the case in the image below).
Thanks to an answer to my related question here, I now know what was going wrong. The problem was in the way I was using the arrays of indices (1, 2, 3, 4, 5...) to access the DataTextures' texels in the shader.
In this function (and the one for the larger DataTextures)...
float texelSizeX = 1.0 / uPerMotifBufferWidth;
float texelSizeY = 1.0 / uPerMotifBufferHeight;
vec2 perMotifUV = vec2(
mod(aCellIndex, uPerMotifBufferWidth)*texelSizeX,
floor(aCellIndex / uPerMotifBufferHeight)*texelSizeY );
perMotifUV += vec2(0.5*texelSizeX, 0.5*texelSizeY);
...I assumed that in order to create the y value for my custom uv, perMotifUV, I would need to divide the aCellIndex by the height of the buffer, uPerMotifBufferHeight (its "vertical" dimension). However, as explained in the SO Q&A here, the indices should, of course, be divided by the buffer's width, which then tells you how many rows down you are!
Thus, the function should be revised to...
float texelSizeX = 1.0 / uPerMotifBufferWidth;
float texelSizeY = 1.0 / uPerMotifBufferHeight;
vec2 perMotifUV = vec2(
mod(aCellIndex, uPerMotifBufferWidth)*texelSizeX,
floor(aCellIndex / uPerMotifBufferWidth)*texelSizeY ); // note the change to uPerMotifBufferWidth here
perMotifUV += vec2(0.5*texelSizeX, 0.5*texelSizeY);
The reason my program worked on square DataTextures (1:1) is that in such cases the height and width were equal, so my function was effectively dividing by width in the incorrect line because height=width!
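For reference, here is the corrected mapping pulled out into a small, generic GLSL helper (the names are mine, not from the project above): it converts a linear texel index into the UV of that texel's centre, dividing by the width to find the row.
// Map a linear texel index to the UV of that texel's centre in a
// bufferWidth x bufferHeight data texture.
vec2 indexToUV(float index, float bufferWidth, float bufferHeight) {
    float col = mod(index, bufferWidth);    // how far along the current row
    float row = floor(index / bufferWidth); // how many full rows precede this index
    return vec2((col + 0.5) / bufferWidth, (row + 0.5) / bufferHeight);
}
// Example: in an 8 x 4 texture, index 11.0 gives col = 3.0, row = 1.0,
// i.e. uv = vec2(3.5 / 8.0, 1.5 / 4.0).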

Radial reveal of image in OpenGL shader

I'm playing with a shader concept to radially reveal an image using a shader in OpenGL ES. The end goal is to create a circular progress bar by discarding fragments in a fragment shader that renders a full circular progress texture.
I have coded my idea here in ShaderToy so you can play with it. I can't seem to get it to work, and since there's no way to debug I'm having a hard time figuring out why.
Here's my glsl code for the fragment shader:
float magnitude(vec2 vec)
{
return sqrt((vec.x * vec.x) + (vec.y * vec.y));
}
float angleBetween(vec2 v1, vec2 v2)
{
return acos(dot(v1, v2) / (magnitude(v1) * magnitude(v2)));
}
float getTargetAngle()
{
return clamp(iGlobalTime, 0.0, 360.0);
}
// OpenGL uses upper left as origin by default
bool shouldDrawFragment(vec2 fragCoord)
{
float targetAngle = getTargetAngle();
float centerX = iResolution.x / 2.0;
float centerY = iResolution.y / 2.0;
vec2 center = vec2(centerX, centerY);
vec2 up = vec2(centerX, 0.0) - center;
vec2 v2 = fragCoord - center;
float angleBetween = angleBetween(up, v2);
return (angleBetween >= 0.0) && (angleBetween <= targetAngle);
}
void mainImage( out vec4 fragColor, in vec2 fragCoord )
{
vec2 uv = fragCoord.xy / iResolution.xy;
if (shouldDrawFragment(fragCoord)) {
fragColor = texture2D(iChannel0, vec2(uv.x, -uv.y));
} else {
fragColor = texture2D(iChannel1, vec2(uv.x, -uv.y));
}
}
It sweeps out, revealing from the bottom on both sides. I just want it to sweep out from a vector pointing straight up, moving in a clockwise motion.
Try this code:
const float PI = 3.1415926;
const float TWO_PI = 6.2831852;
float magnitude(vec2 vec)
{
return sqrt((vec.x * vec.x) + (vec.y * vec.y));
}
float angleBetween(vec2 v1, vec2 v2)
{
return atan( v1.x - v2.x, v1.y - v2.y ) + PI;
}
float getTargetAngle()
{
return clamp( iGlobalTime, 0.0, TWO_PI );
}
// OpenGL uses upper left as origin by default
bool shouldDrawFragment(vec2 fragCoord)
{
float targetAngle = getTargetAngle();
float centerX = iResolution.x / 2.0;
float centerY = iResolution.y / 2.0;
vec2 center = vec2(centerX, centerY);
float a = angleBetween(center, fragCoord );
return a <= targetAngle;
}
void mainImage( out vec4 fragColor, in vec2 fragCoord )
{
vec2 uv = fragCoord.xy / iResolution.xy;
if (shouldDrawFragment(fragCoord)) {
fragColor = texture2D(iChannel0, vec2(uv.x, -uv.y));
} else {
fragColor = texture2D(iChannel1, vec2(uv.x, -uv.y));
}
}
Explanation:
The main change I made was the way the angle between two vectors is calculated:
return atan( v1.x - v2.x, v1.y - v2.y ) + PI;
This is the angle of the difference vector between v1 and v2. If you swap the x and y values, it will change where the 0 angle is, i.e. if you try this:
return atan( v1.y - v2.y, v1.x - v2.x ) + PI;
the circle begins from the right rather than upwards. You can also invert the value of atan to change the direction of the animation.
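For example, a quick (untested) variant: negating the atan value, as mentioned above, mirrors the angle and so reverses the direction of the sweep:
float angleBetween(vec2 v1, vec2 v2)
{
    // negated atan: the reveal sweeps in the opposite direction
    return -atan( v1.x - v2.x, v1.y - v2.y ) + PI;
}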
You also don't need to worry about the up vector when calculating the angle; notice that the code just takes the angle between the center and the current fragment coordinates:
float a = angleBetween(center, fragCoord );
Other Notes:
Remember calculations are in radians, not degrees, so I changed the clamp on time (although this doesn't really affect the output):
return clamp( iGlobalTime, 0.0, TWO_PI );
You have a variable with the same name as one of your functions:
float angleBetween = angleBetween(up, v2);
which should be avoided since not all implementations are happy with it; I couldn't compile your shader on my current machine until I changed this.
Change only the two functions below:
float getTargetAngle()
{
return clamp(iGlobalTime, 0.0, 6.14);
}
bool shouldDrawFragment(vec2 fragCoord)
{
float targetAngle = getTargetAngle();
float centerX = iResolution.x / 2.0;
float centerY = iResolution.y / 2.0;
vec2 center = vec2(centerX, centerY);
vec2 up = vec2(centerX, 0.0) - center;
vec2 v2 = fragCoord - center;
if(fragCoord.x > 320.0) // half the width
{
up += 2.0*vec2(up.x,-up.y);
targetAngle *= 2.;
}
else
{
up -= 2.0*vec2(up.x,-up.y);
targetAngle -= 1.57;
}
float angleBetween = angleBetween(up, v2);
return (angleBetween >= 0.0) && (angleBetween <= targetAngle);
}

Blur frame buffer content

Can I blur not a texture but the frame buffer? The following shader blurs a texture:
#ifdef GL_ES
precision mediump float;
#endif
varying vec4 v_fragmentColor;
varying vec2 v_texCoord;
uniform vec2 resolution;
uniform float blurRadius;
uniform float sampleNum;
vec3 blur(vec2);
void main(void)
{
vec3 col = blur(v_texCoord);
gl_FragColor = vec4(col, 1.0) * v_fragmentColor;
}
vec3 blur(vec2 p)
{
if (blurRadius > 0.0 && sampleNum > 1.0)
{
vec3 col = vec3(0);
vec2 unit = 1.0 / resolution.xy;
float r = blurRadius;
float sampleStep = r / sampleNum;
float count = 0.0;
for(float x = -r; x < r; x += sampleStep)
{
for(float y = -r; y < r; y += sampleStep)
{
float weight = (r - abs(x)) * (r - abs(y));
col += texture2D(CC_Texture0, p + vec2(x * unit.x, y * unit.y)).rgb * weight;
count += weight;
}
}
return col / count;
}
return texture2D(CC_Texture0, p).rgb;
}
How can I blur not texture pixels, but frame buffer pixels that have already been drawn?
You can apply that blur shader to a texture that you've attached to the framebuffer, since drawing to the framebuffer writes the pixel data to the selected colour attachment.
This page provides info on one way it can be done, by drawing the framebuffer's colour attachment to the screen with a shader. It's written for OpenGL 3.3, but all the functions used should exist in OpenGL ES 2.0.
