I'm having a problem with a GLSL shader that interpolates color in 3D space and assigns it based on the 3D coordinates of the bounding box, and I can't seem to fix it:
The stamen in this codepen: https://codepen.io/ricky1280/pen/BaxyaZY
This is the code that I think probably has the problem, the geometry of the sphere:
const stamenEndCap = new THREE.SphereGeometry( sinCurveScale/120, 20, 20 );
// stamenEndCap.scale(1,1.5,1)
stamenEndCap.scale(4,1,1) //find a way to rotate geometry relative to the sin curve at the end
stamenEndCap.toNonIndexed();
stamenEndCap.computeBoundingSphere();
stamenEndCap.computeBoundingBox();
stamenEndCap.normalizeNormals();
stamenEndCap.computeTangents();
console.log(stamenEndCap.attributes.position.array)
for (var i = 0; i < stamenEndCap.attributes.position.array.length; i += 3) {
  stamenEndCap.attributes.position.array[i]     += centerEnd.x; // offset
  stamenEndCap.attributes.position.array[i + 1] += centerEnd.y;
  stamenEndCap.attributes.position.array[i + 2] += centerEnd.z; // height?
}
stamenEndCap.computeVertexNormals();
// let positionVector = new THREE.Vector3(spherePoint.x,spherePoint.y,spherePoint.z)
// console.log(positionVector)
stamenEndCap.attributes.position.needsUpdate = true;
console.log(stamenEndCap.attributes.position.array)
let merge = THREE.BufferGeometryUtils.mergeBufferGeometries([geometry2,stamenEndCap])
merge.attributes.position.needsUpdate = true;
It is shaded improperly; it looks like this:
The color changes harshly from white to that light blue color along the vertical axis, even though the stamen end cap (line 364 of the codepen) is merged with the tube geometry and the shader is calculated across the 3D space of the entire merged object. The geometry becomes "merge" on line 394, and then "stamenGeom" on line 400. Its bounding box is then used in the vertex and fragment shaders on lines 422-552.
I'm not sure how to shade this properly so that it transitions smoothly, without the line marking the change from white to blue. It doesn't seem to respond to normals, unfortunately.
Viewing the stamen from above (plan view) shows that the color transitions properly, but viewed from the side it appears as in the image.
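For reference, this is how the fragment shader uses that bounding box to place the vertical gradient (these lines appear in the full shader code below):
float f = clamp((vPos.z - bbMin.z) / (bbMax.z - bbMin.z) + vertOffset, 0., 1.);
float linear_modifier = (1.00 * abs(1.) * f);
vec3 col = mix(color1, color2, linear_modifier);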
If anyone has any advice or solutions please let me know, and thank you for reading all of this.
Figured it out: in the shader code, the colors weren't being blended properly.
Previous fragment shader code:
`vec4 diffuseColor = vec4( diffuse, opacity );`,
`
vec4 white = vec4(1.0, 1.0, 1.0, 1.0);
vec4 red = vec4(1.0, 0.0, 0.0, 1.0);
vec4 blue = vec4(0.0, 0.0, 1.0, 1.0);
vec4 green = vec4(0.0, 1.0, 0.0, 1.0);
float f = clamp((vPos.z - bbMin.z) / (bbMax.z - bbMin.z)+vertOffset, 0., 1.);
// + is slider for vertical color position, -1 to 1
float linear_modifier = (1.00 * abs(1.) * f);
//vertical gradient position!!
//moves from 0-10?
vec3 col = mix(color1, color2, linear_modifier);
//float f2 = clamp((vPos.x - bbMin.x) / (bbMax.x - bbMin.x), 0., 1.);
float f2 = clamp(vUv.x, 0., 1.);
vec2 pos_ndc = vPos.xy*centerSize2;
float dist = length(pos_ndc*centerSize);
//controls central gradient position!
//the lower the larger?
//0-20
// float linear_modifier2 = (1.00 * abs(sin(1.0)) * dist);
//col = mix(color3, col, dist);
//NOT USING DIST REMOVES VERTICAL CENTRAL GRADIENT
// vec4 diffuseColor = vec4( col, opacity );
float f3 = clamp(vUv.x+f3Offset, 0., 1.);
// ^ THIS controls brightness of lowlights. lower the more intense.
col = mix(color3, col, f3);
//not using this removes LOWLIGHTS
//f3 is subtle fade
//col = mix(color3, col, f3);
//col = mix(color3, col, f2);
//f2 is default
vec4 diffuseColor = vec4( col, opacity );`
Fixed shader code:
`vec4 diffuseColor = vec4( diffuse, opacity );`,
`
vec4 white = vec4(1.0, 1.0, 1.0, 1.0);
vec4 red = vec4(1.0, 0.0, 0.0, 1.0);
vec4 blue = vec4(0.0, 0.0, 1.0, 1.0);
vec4 green = vec4(0.0, 1.0, 0.0, 1.0);
float f = clamp((vPos.z - bbMin.z) / (bbMax.z - bbMin.z)+vertOffset, 0., 1.);
// + is slider for vertical color position, -1 to 1
float linear_modifier = (1.00 * abs(1.) * f);
//vertical gradient position!!
//moves from 0-10?
vec3 col = mix(color1, color2, linear_modifier);
float f2 = clamp((vPos.x - bbMin.x) / (bbMax.x - bbMin.x), 0., 1.);
//float f2 = clamp(vUv.x, 0., 1.);
vec2 pos_ndc = vPos.xy*centerSize2;
float dist = length(pos_ndc*centerSize);
//controls central gradient position!
//the lower the larger?
//0-20
// float linear_modifier2 = (1.00 * abs(sin(1.0)) * dist);
//col = mix(color3, col, dist);
//NOT USING DIST REMOVES VERTICAL CENTRAL GRADIENT
// vec4 diffuseColor = vec4( col, opacity );
float f3 = clamp(vUv.x+f3Offset, 0., 1.);
// ^ THIS controls brightness of lowlights. lower the more intense.
//col = mix(color3, col, f3);
//not using this removes LOWLIGHTS
//f3 is subtle fade
//col = mix(color3, col, f3);
//col = mix(color3, col, f2);
//f2 is default
vec4 diffuseColor = vec4( col, opacity );
`
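Comparing the two, the only changes are these lines (everything else is identical):
// previous:
float f2 = clamp(vUv.x, 0., 1.);
col = mix(color3, col, f3); // f3 is also derived from vUv.x
// fixed: f2 comes from the bounding box instead, and the UV-driven lowlight mix is commented out
float f2 = clamp((vPos.x - bbMin.x) / (bbMax.x - bbMin.x), 0., 1.);
//col = mix(color3, col, f3);
Since col = mix(color3, col, f2) stays commented out in both versions, the change that actually removes the seam is disabling the f3 lowlight mix: its factor comes from vUv.x, and the merged end cap and tube each keep their own UVs, so that factor jumps at the seam, while the bounding-box-based terms are continuous across the whole merged geometry.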
I'm attempting to create a shader that additively blends colored "blobs" (kind of like particles) on top of one another. This seems like it should be a straightforward task but I'm getting strange "banding"-like artifacts when the blobs blend.
First off, here's the behavior I'm after (replicated using Photoshop layers):
Note that the three color layers are all set to the blend mode "Linear Dodge (Add)", which, as far as I understand, is Photoshop's "additive" blend mode.
If I merge the color layers and leave the resulting layer set to "Normal" blending, I'm then free to change the background color as I please.
Obviously additive blending will not work on top of a non-black background, so in the end I will also want/need the shader to support this pre-merging of colors before finally blending into a background that could be any color. However, I'm content for now to focus only on getting the additive-on-top-of-black blending working correctly, because it currently isn't.
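For reference, the standard formulas behind those two blend modes (a quick GLSL sketch of the math, not code from my shader):
// Additive ("Linear Dodge (Add)"): the layers simply sum, so a black background contributes nothing.
vec3 addBlend(vec3 dst, vec3 src) { return dst + src; }
// Normal ("over"): the merged result is weighted against the background by its alpha.
vec3 overBlend(vec3 dst, vec3 src, float alpha) { return src * alpha + dst * (1.0 - alpha); }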
Here's my shader code in its current state.
const int MAX_SHAPES = 10;
vec2 spread = vec2(0.3, 0.3);
vec2 offset = vec2(0.0, 0.0);
float shapeSize = 0.3;
const float s = 1.0;
float shapeColors[MAX_SHAPES * 3] = float[MAX_SHAPES * 3] (
s, 0.0, 0.0,
0.0, s, 0.0,
0.0, 0.0, s,
s, 0.0, 0.0,
s, 0.0, 0.0,
s, 0.0, 0.0,
s, 0.0, 0.0,
s, 0.0, 0.0,
s, 0.0, 0.0,
s, 0.0, 0.0
);
// pseudo-random looping motion for blob i
vec2 motionFunction (float i) {
    float t = iTime;
    return vec2(
        (cos(t * 0.31 + i * 3.0) + cos(t * 0.11 + i * 14.0) + cos(t * 0.78 + i * 30.0) + cos(t * 0.55 + i * 10.0)) / 4.0,
        (cos(t * 0.13 + i * 33.0) + cos(t * 0.66 + i * 38.0) + cos(t * 0.42 + i * 83.0) + cos(t * 0.9 + i * 29.0)) / 4.0
    );
}

// linear interpolation of src over dst by alpha ("normal" blending)
float blend (float src, float dst, float alpha) {
    return alpha * src + (1.0 - alpha) * dst;
}

void mainImage (out vec4 fragColor, in vec2 fragCoord) {
    // map fragCoord to a coordinate system centered on the screen, corrected for aspect ratio
    float aspect = iResolution.x / iResolution.y;
    float x = (fragCoord.x / iResolution.x) - 0.5;
    float y = (fragCoord.y / iResolution.y) - 0.5;
    vec2 pixel = vec2(x, y / aspect);

    // additively accumulate each blob's color and coverage
    vec4 totalColor = vec4(0.0, 0.0, 0.0, 0.0);
    for (int i = 0; i < MAX_SHAPES; i++) {
        if (i >= 3) {
            break;
        }
        vec2 shapeCenter = motionFunction(float(i));
        shapeCenter *= spread;
        shapeCenter += offset;
        float dx = shapeCenter.x - pixel.x;
        float dy = shapeCenter.y - pixel.y;
        float d = sqrt(dx * dx + dy * dy);
        float ratio = d / shapeSize;
        float intensity = 1.0 - clamp(ratio, 0.0, 1.0);
        totalColor.x = totalColor.x + shapeColors[i * 3 + 0] * intensity;
        totalColor.y = totalColor.y + shapeColors[i * 3 + 1] * intensity;
        totalColor.z = totalColor.z + shapeColors[i * 3 + 2] * intensity;
        totalColor.w = totalColor.w + intensity;
    }

    // blend the accumulated color over a black background, using the accumulated coverage as alpha
    float alpha = clamp(totalColor.w, 0.0, 1.0);
    float background = 0.0;
    fragColor = vec4(
        blend(totalColor.x, background, alpha),
        blend(totalColor.y, background, alpha),
        blend(totalColor.z, background, alpha),
        1.0
    );
}
And here's a ShaderToy version where you can view it live — https://www.shadertoy.com/view/wlf3RM
Or as a video — https://streamable.com/un25t
The visual artifacts should be pretty obvious, but here's a video that points them out: https://streamable.com/kxaps
(I think they are way more prevalent in the video linked before this one, though. The motion really makes them pop out.)
Also as a static image for comparison:
Basically, there are "edges" that appear at certain magical thresholds. I have no idea how they got there or how to get rid of them. Your help would be highly appreciated.
The inside lines are where totalColor.w reaches 1 and so alpha is clamped to 1 inside them. The outside ones that you've traced in white are the edges of the circles.
I modified your ShaderToy by changing float alpha = clamp(totalColor.w, 0.0, 1.0); to float alpha = 1.0;, and float intensity = 1.0 - clamp(ratio, 0.0, 1.0); to float intensity = smoothstep(1.0, 0.0, ratio); (to smooth out the edges of the circles). Now it looks like the first picture.
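As a diff of just those two lines (the rest of the shader is unchanged):
// before:
// float alpha = clamp(totalColor.w, 0.0, 1.0);
// float intensity = 1.0 - clamp(ratio, 0.0, 1.0);
// after: no alpha clamp at all, and a smooth falloff at the circle edge
float alpha = 1.0;
float intensity = smoothstep(1.0, 0.0, ratio); // 1.0 at the center, easing to 0.0 where d reaches shapeSize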
I'm trying to rotate an image in WebGL. If the texture has the same width and height there is no problem, but if the width is, for example, 256px and the height only 32px, the image gets skewed.
It seems as if only the texture is rotating and not the vertices. However, usually when only the texture rotates, its corners get clipped as they move outside the vertices. That doesn't happen here, so I'm a bit confused.
Here is my vertex shader code:
precision lowp float;
attribute vec3 vertPosition;
attribute vec3 vertColor;
attribute vec2 aTextureCoord;
varying vec3 fragColor;
varying lowp vec2 vTextureCoord;
varying lowp vec2 vTextureCoordBg;
uniform vec2 uvOffsetBg;
uniform vec2 uvScaleBg;
uniform mat4 uPMatrix;
uniform vec2 uvOffset;
uniform vec2 uvScale;
uniform vec3 translation;
uniform vec3 scale;
uniform float rotateZ;
uniform vec2 vertPosFixAfterRotate;
void main()
{
    fragColor = vertColor;
    vTextureCoord = (vec4(aTextureCoord.x, aTextureCoord.y, 0, 1)).xy * uvScale + uvOffset;
    vTextureCoordBg = (vec4(aTextureCoord, 0, 1)).xy * uvScaleBg + uvOffsetBg;
    mat4 worldPosTrans = mat4(
        vec4(scale.x*cos(rotateZ), scale.y*-sin(rotateZ), 0, 0),
        vec4(scale.x*sin(rotateZ), scale.y*cos(rotateZ), 0, 0),
        vec4(0, 0, scale.z, 0),
        vec4(translation.x, translation.y, translation.z, 1));
    gl_Position = (uPMatrix * worldPosTrans) * vec4(vertPosition.x + vertPosFixAfterRotate.x, vertPosition.y + vertPosFixAfterRotate.y, vertPosition.z, 1.0);
}
The rotation is sent from javascript to the shader through the rotateZ uniform.
You have to do the scaling before the rotation:
Scale matrix:
mat4 sm = mat4(
    vec4(scale.x, 0.0, 0.0, 0.0),
    vec4(0.0, scale.y, 0.0, 0.0),
    vec4(0.0, 0.0, scale.z, 0.0),
    vec4(0.0, 0.0, 0.0, 1.0));
Rotation matrix:
mat4 rm = mat4(
    vec4(cos(rotateZ), -sin(rotateZ), 0.0, 0.0),
    vec4(sin(rotateZ), cos(rotateZ), 0.0, 0.0),
    vec4(0.0, 0.0, 1.0, 0.0),
    vec4(0.0, 0.0, 0.0, 1.0));
Translation matrix:
mat4 tm = mat4(
    vec4(1.0, 0.0, 0.0, 0.0),
    vec4(0.0, 1.0, 0.0, 0.0),
    vec4(0.0, 0.0, 1.0, 0.0),
    vec4(translation.x, translation.y, translation.z, 1.0));
Model transformation:
mat4 worldPosTrans = tm * rm * sm;
See the result, and focus on scale.x and scale.y in comparison to the code snippet in your question:
mat4 worldPosTrans = mat4(
    vec4(scale.x * cos(rotateZ), scale.x * -sin(rotateZ), 0.0, 0.0),
    vec4(scale.y * sin(rotateZ), scale.y * cos(rotateZ), 0.0, 0.0),
    vec4(0.0, 0.0, scale.z, 0.0),
    vec4(translation.x, translation.y, translation.z, 1.0));
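A quick way to see where the skew comes from (a sketch, not part of the answer above): the question's matrix is effectively sm * rm, so the non-uniform scale is applied along the world axes after the rotation, and the first two columns stop being perpendicular:
// First two columns of the matrix from the question:
vec2 c0 = vec2(scale.x * cos(rotateZ), scale.y * -sin(rotateZ));
vec2 c1 = vec2(scale.x * sin(rotateZ), scale.y *  cos(rotateZ));
// dot(c0, c1) == (scale.x * scale.x - scale.y * scale.y) * sin(rotateZ) * cos(rotateZ),
// which is non-zero whenever scale.x != scale.y (except at multiples of 90 degrees),
// and non-perpendicular axes are exactly a shear.
// With tm * rm * sm the columns are scale.x * vec2(cos(rotateZ), -sin(rotateZ)) and
// scale.y * vec2(sin(rotateZ), cos(rotateZ)), whose dot product is always 0.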
Can we have a vertex shader without attributes?
#version 300 es
out mediump vec4 basecolor;
uniform ivec2 x1;
void main(void)
{
    if (x1 == ivec2(10, 20))
        basecolor = vec4(0.0, 1.0, 0.0, 1.0);
    else
        basecolor = vec4(1.0, 0.0, 1.0, 1.0);
    gl_PointSize = 64.0;
    gl_Position = vec4(0.0, 0.0, 0.0, 1.0);
}
#version 300 es
in mediump vec4 basecolor;
out vec4 FragColor;
void main(void)
{
    FragColor = basecolor;
}
Technically there is nothing in the specification that actually requires you to have vertex attributes. But by the same token, in OpenGL ES 3.0 you have two intrinsically declared input variables whether you want them or not:
The built-in vertex shader variables for communicating with fixed functionality are intrinsically declared as follows in the vertex language:
in highp int gl_VertexID;
in highp int gl_InstanceID;
This is really the only time it actually makes sense not to have any attributes. You can dynamically compute the position based on gl_VertexID, gl_InstanceID or some combination of both, which is a major change from OpenGL ES 2.0.
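For example, a buffer-less full-screen triangle is a common pattern (a sketch, assuming it is drawn with gl.drawArrays(gl.TRIANGLES, 0, 3) and no vertex attributes enabled):
#version 300 es
void main(void)
{
    // Vertex 0 -> (-1, -1), vertex 1 -> (3, -1), vertex 2 -> (-1, 3):
    // a single triangle large enough to cover all of clip space.
    vec2 pos = vec2(gl_VertexID == 1 ? 3.0 : -1.0,
                    gl_VertexID == 2 ? 3.0 : -1.0);
    gl_Position = vec4(pos, 0.0, 1.0);
}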
I want to write a shader that creates a reflection of an image similar to the ones used for coverflows.
// Vertex Shader
uniform highp mat4 u_modelViewMatrix;
uniform highp mat4 u_projectionMatrix;
attribute highp vec4 a_position;
attribute lowp vec4 a_color;
attribute highp vec2 a_texcoord;
varying lowp vec4 v_color;
varying highp vec2 v_texCoord;
mat4 rot = mat4( -1.0, 0.0, 0.0, 0.0,
0.0, -1.0, 0.0, 0.0,
0.0, 0.0, 1.0, 0.0,
0.0, 0.0, 0.0, 1.0 );
void main()
{
    gl_Position = (u_projectionMatrix * u_modelViewMatrix) * a_position * rot;
    v_color = a_color;
    v_texCoord = a_texcoord;
}
// Fragment Shader
varying highp vec2 v_texCoord;
uniform sampler2D u_texture0;
uniform int slices;
void main()
{
    lowp vec3 w = vec3(1.0, 1.0, 1.0);
    lowp vec3 b = vec3(0.0, 0.0, 0.0);
    lowp vec3 gradient = mix(b, w, (v_texCoord.y - (float(slices) / 10.0)));
    gl_FragColor = texture2D(u_texture0, v_texCoord) * vec4(gradient, 1.0);
}
But this shader is creating the following:
current result
And I don't know how to "flip" the image horizontally. I have tried many different parameters in the rotation matrix (I even tried a so-called "mirror matrix"), but I don't know how to reflect the image below the original image.
If you're talking about what images.google.com returns for "coverflow", then you don't need a rotation matrix at all.
void main()
{
    gl_Position = (u_projectionMatrix * u_modelViewMatrix) * a_position;
    v_color = a_color;
    v_texCoord = vec2(a_texcoord.x, 1.0 - a_texcoord.y);
}
Simply flip it vertically.
If you insist on using a matrix and want to make a "mirror" shader (one that takes the object and puts it under the "floor" to make a reflection), then you need a mirror matrix (don't forget to adjust front-face/back-face culling):
mat4(1.0, 0.0, 0.0, 0.0,
0.0, -1.0, 0.0, 0.0,
0.0, 0.0, 1.0, 0.0,
0.0, 0.0, 0.0, 1.0 );
AND you must know where the floor is.
gl_Position = (u_projectionMatrix * u_modelViewMatrix) * (mirrorMatrix * a_position - floorOffset);
Alternatively, you could put the floor translation into the same matrix. Basically, to mirror against an arbitrary height, you need to combine three transforms (pseudocode, applied right to left: move the floor to the origin, flip Y, then move it back):
translate(0, floorHeight, 0) * scale(1, -1, 1) * translate(0, -floorHeight, 0)
and put them into your matrix.
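Expanded into a single column-major GLSL matrix (a sketch, assuming floorHeight is a uniform holding the y coordinate of the reflection plane):
mat4 mirrorAboutFloor = mat4(
    vec4(1.0,  0.0, 0.0, 0.0),
    vec4(0.0, -1.0, 0.0, 0.0),
    vec4(0.0,  0.0, 1.0, 0.0),
    vec4(0.0,  2.0 * floorHeight, 0.0, 1.0)); // maps y to 2.0 * floorHeight - y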
Also, it might make sense to split the modelView matrix into separate "model" (object/world) and "view" matrices. That way it will be easier to perform transformations like these.