Can we have a vertex shader without attributes?
// Vertex shader
#version 300 es
out mediump vec4 basecolor;
uniform ivec2 x1;
void main(void)
{
    if (x1 == ivec2(10, 20))
        basecolor = vec4(0.0, 1.0, 0.0, 1.0);
    else
        basecolor = vec4(1.0, 0.0, 1.0, 1.0);
    gl_PointSize = 64.0;
    gl_Position = vec4(0.0, 0.0, 0.0, 1.0);
}
// Fragment shader
#version 300 es
precision mediump float;
in mediump vec4 basecolor;
out vec4 FragColor;
void main(void)
{
    FragColor = basecolor;
}
Technically there is nothing in the specification that actually requires you to have vertex attributes. But by the same token, in OpenGL ES 3.0 you have two intrinsically declared input variables whether you want them or not:
The built-in vertex shader variables for communicating with fixed functionality are intrinsically declared as follows in the vertex language:
in highp int gl_VertexID;
in highp int gl_InstanceID;
This is really the only time it actually makes sense not to have any attributes. You can dynamically compute the position based on gl_VertexID, gl_InstanceID or some combination of both, which is a major change from OpenGL ES 2.0.
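For example, here is a minimal attribute-less vertex shader (a sketch, not taken from the question) that derives a full-screen triangle purely from gl_VertexID; you would issue a three-vertex draw call with no attribute arrays enabled:
#version 300 es
out mediump vec2 texcoord;
void main(void)
{
    // gl_VertexID 0, 1, 2 map to (-1,-1), (3,-1), (-1,3): one oversized
    // triangle that covers the whole viewport after clipping.
    float x = float((gl_VertexID & 1) << 2) - 1.0;
    float y = float((gl_VertexID & 2) << 1) - 1.0;
    texcoord = vec2(x, y) * 0.5 + 0.5;
    gl_Position = vec4(x, y, 0.0, 1.0);
}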
Is there a possibility to cast a shadow from a plane whose texture plays a video with a chromakey shader? My trial seems to answer NO, but I guess my shader is not adapted. The object is a simple PlaneBufferGeometry and the shader is:
vertexShader is:
varying vec2 vUv;
void main() {
    vUv = uv;
    vec4 mvPosition = modelViewMatrix * vec4( position, 1.0 );
    gl_Position = projectionMatrix * mvPosition;
}
fragmentShader is:
uniform sampler2D vidtexture;
uniform vec3 color;
varying vec2 vUv;
void main() {
    vec3 tColor = texture2D( vidtexture, vUv ).rgb;
    float a = (length(tColor - color) - 0.5) * 7.0;
    gl_FragColor = vec4(tColor, a);
}
Have you tried using the discard keyword? If your fragment shader hits that keyword, it won't render that fragment; it's as if it didn't exist. You could use this to create shadow outlines defined by your chromakey instead of always getting a square shadow.
void main() {
    vec3 tColor = texture2D( vidtexture, vUv ).rgb;
    float a = (length(tColor - color) - 0.5) * 7.0;
    // Do not render pixels that are less than 10% opaque
    if (a < 0.1) discard;
    gl_FragColor = vec4(tColor, a);
}
This is the same approach Three.js uses for Material.alphaTest in all of its built-in materials. You can see the GLSL source code for that feature here.
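For reference, the alphaTest pattern boils down to a conditional discard against a uniform threshold, along these lines (a sketch of the idea with illustrative uniform names, not the verbatim three.js chunk):
uniform sampler2D map;
uniform float alphaTest; // illustrative name for the threshold
varying vec2 vUv;
void main() {
    vec4 texel = texture2D(map, vUv);
    // Fragments below the threshold are discarded instead of blended
    if (texel.a < alphaTest) discard;
    gl_FragColor = texel;
}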
I want to implement a shader like MeshNormalMaterial, but I have no idea how to convert a normal to a color.
In THREE.js:
My test1:
// Vertex shader
varying vec3 vNormal;
void main(void) {
    vNormal = abs(normal);
    gl_Position = matrix_viewProjection * matrix_model * vec4(position, 1.0);
}
// Fragment shader
varying vec3 vNormal;
void main(void) {
    gl_FragColor = vec4(vNormal, 1.0);
}
My test2:
// Vertex shader
varying vec3 vNormal;
void main(void) {
    vNormal = normalize(normal) * 0.5 + 0.5;
    gl_Position = matrix_viewProjection * matrix_model * vec4(position, 1.0);
}
// Fragment shader
varying vec3 vNormal;
void main(void) {
    gl_FragColor = vec4(vNormal, 1.0);
}
These are just tests; I can't find any resources about how to calculate the color...
Can anyone help me?
Thanks.
If you want to see the normal vector in view space, then you have to transform the normal vector from model space to world space and from world space to view space. This can be done in one step by transforming the normal vector with the normalMatrix.
varying vec3 vNormal;
void main(void)
{
    vNormal = normalMatrix * normalize(normal);
    gl_Position = matrix_viewProjection * matrix_model * vec4(position, 1.0);
}
Since a varying variable is interpolated when it is passed from the vertex shader to the fragment shader (according to the barycentric coordinates of the fragment), the transformation to a color should be done in the fragment shader. Note that after the interpolation the normal vector has to be normalized again:
varying vec3 vNormal;
void main(void)
{
    vec3 view_nv = normalize(vNormal);
    vec3 nv_color = view_nv * 0.5 + 0.5;
    gl_FragColor = vec4(nv_color, 1.0);
}
Since the normal vector is normalized, its components are in the range [-1.0, 1.0]. How to represent it as a color is up to you.
If you use the absolute value, then a positive and a negative component of the same magnitude get the same color representation, and the intensity of the color increases with the magnitude of the component.
With the formula normal * 0.5 + 0.5, the intensity increases from 0 to 1 as the component goes from -1.0 to 1.0.
By convention, the x component is represented as red, the y component as green and the z component as blue.
The colors can be saturated by dividing by the maximum of the components:
varying vec3 vNormal;
void main(void)
{
    vec3 view_nv = normalize(vNormal);
    vec3 nv_color = abs(view_nv);
    nv_color /= max(nv_color.x, max(nv_color.y, nv_color.z));
    gl_FragColor = vec4(nv_color, 1.0);
}
I am making a game with a fog of war layer covering the board. I want to have a cursor that shows up when the player mouses over a tile, and I'm implementing this as a glow effect around the tile, also done with a shader.
I'm running into a strange issue: the glow effect works fine for positive x values (when the camera is set at x = -250, y = 250) but I can't see it for negative x values unless the camera gets rotated to almost completely vertical (or I move the camera underneath the fog of war layer).
It's hard to explain, so I've made a CodePen demonstrating the problem: https://codepen.io/jakedluhy/pen/QqzajN?editors=0010
I'm pretty new to custom shaders, so any insight or help would be appreciated. Here are the shaders for the fog of war:
// Vertex
varying vec4 vColor;
void main() {
    vec3 cRel = cameraPosition - position;
    float dx = (20.0 * cRel.x) / cRel.y;
    float dz = (20.0 * cRel.z) / cRel.y;
    gl_Position = projectionMatrix *
                  modelViewMatrix *
                  vec4(
                      position.x + dx,
                      position.y,
                      position.z + dz,
                      1.0
                  );
    vColor = vec4(0.0, 0.0, 0.0, 0.7);
}
// Fragment
varying vec4 vColor;
void main() {
    gl_FragColor = vColor;
}
And the shaders for the "glow":
// Vertex
varying vec4 vColor;
attribute float alpha;
void main() {
    vColor = vec4(color, alpha);
    gl_Position = projectionMatrix *
                  modelViewMatrix *
                  vec4(position, 1.0);
}
// Fragment
varying vec4 vColor;
void main() {
    gl_FragColor = vColor;
}
The math in the vertex shader for the fog of war is to keep the fog in a relative position to the game board.
Tagging THREE.js and glsl because I'm not sure whether this is a THREE.js exclusive problem or not...
Edit: version 0.87.1
Your example looks pretty weird. Setting depthWrite: false on your fog material makes the two boxes render; a transparent plane that writes to the depth buffer can occlude other transparent objects drawn after it.
I have a 3x3 homography matrix that works correctly with OpenCV's warpPerspective, but I need to do the warping on the GPU for performance reasons. What is the best approach? I tried multiplying by the homography in the vertex shader to get the texture coordinates and then rendering a quad, but I get strange distortions. I'm not sure if it's the interpolation not working as I expect. Attaching output for comparison (it involves two different, but close enough, shots).
Absolute difference of warp and other image from GPU:
Composite of warp and other image in OpenCV:
EDIT:
Following are my shaders; the task is image rectification (making epipolar lines become scanlines) plus absolute difference.
// Vertex Shader
static const char* warpVS = STRINGIFY
(
    uniform highp mat3 homography1;
    uniform highp mat3 homography2;
    uniform highp int width;
    uniform highp int height;
    attribute highp vec2 position;
    varying highp vec2 refTexCoords;
    varying highp vec2 curTexCoords;

    highp vec2 convertToTexture(highp vec3 pixelCoords) {
        pixelCoords /= pixelCoords.z; // need to project
        pixelCoords /= vec3(float(width), float(height), 1.0);
        pixelCoords.y = 1.0 - pixelCoords.y; // origin is in the bottom left corner for textures
        return pixelCoords.xy;
    }

    void main(void)
    {
        gl_Position = vec4(position / vec2(float(width) / 2.0, float(height) / 2.0) - vec2(1.0), 0.0, 1.0);
        gl_Position.y = -gl_Position.y;
        highp vec3 initialCoords = vec3(position, 1.0);
        refTexCoords = convertToTexture(homography1 * initialCoords);
        curTexCoords = convertToTexture(homography2 * initialCoords);
    }
);
// Fragment Shader
static const char* warpFS = STRINGIFY
(
    varying highp vec2 refTexCoords;
    varying highp vec2 curTexCoords;
    uniform mediump sampler2D refTex;
    uniform mediump sampler2D curTex;
    uniform mediump sampler2D maskTex;

    void main(void)
    {
        if (texture2D(maskTex, refTexCoords).r == 0.0) {
            discard;
        }
        if (any(bvec4(curTexCoords[0] < 0.0, curTexCoords[1] < 0.0, curTexCoords[0] > 1.0, curTexCoords[1] > 1.0))) {
            discard;
        }
        mediump vec4 referenceColor = texture2D(refTex, refTexCoords);
        mediump vec4 currentColor = texture2D(curTex, curTexCoords);
        gl_FragColor = vec4(abs(referenceColor.r - currentColor.r), 1.0, 0.0, 1.0);
    }
);
I think you just need to do the projection per pixel. Make refTexCoords and curTexCoords at least vec3, then do the divide by z in the fragment shader before the texture lookup. Even better, use the textureProj GLSL function (texture2DProj in GLSL ES 1.00).
You want to do everything that is linear in the vertex shader, but things like the perspective projection need to be done in the fragment shader, per pixel.
This link might help with some background: http://www.reedbeta.com/blog/2012/05/26/quadrilateral-interpolation-part-1/
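A minimal sketch of that change, using the question's variable names and showing only refTexCoords (curTexCoords is handled the same way). The division by the image size stays in the vertex shader because it is linear, while the perspective divide moves to the fragment shader:
// Vertex shader (sketch)
uniform highp mat3 homography1;
uniform highp int width;
uniform highp int height;
attribute highp vec2 position;
varying highp vec3 refTexCoords; // note: vec3, still homogeneous

void main(void)
{
    gl_Position = vec4(position / vec2(float(width) / 2.0, float(height) / 2.0) - vec2(1.0), 0.0, 1.0);
    gl_Position.y = -gl_Position.y;
    highp vec3 t = homography1 * vec3(position, 1.0);
    // normalize x and y by the image size, but keep z: no perspective divide here
    refTexCoords = vec3(t.x / float(width), t.y / float(height), t.z);
}

// Fragment shader (sketch)
varying highp vec3 refTexCoords;
uniform mediump sampler2D refTex;

void main(void)
{
    // perspective divide per fragment, then flip y for the bottom-left texture origin
    highp vec2 uv = refTexCoords.xy / refTexCoords.z;
    uv.y = 1.0 - uv.y;
    gl_FragColor = texture2D(refTex, uv);
}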
I want to write a shader that creates a reflection of an image similar to the ones used in cover flows.
// Vertex Shader
uniform highp mat4 u_modelViewMatrix;
uniform highp mat4 u_projectionMatrix;
attribute highp vec4 a_position;
attribute lowp vec4 a_color;
attribute highp vec2 a_texcoord;
varying lowp vec4 v_color;
varying highp vec2 v_texCoord;
mat4 rot = mat4( -1.0, 0.0, 0.0, 0.0,
0.0, -1.0, 0.0, 0.0,
0.0, 0.0, 1.0, 0.0,
0.0, 0.0, 0.0, 1.0 );
void main()
{
    gl_Position = (u_projectionMatrix * u_modelViewMatrix) * a_position * rot;
    v_color = a_color;
    v_texCoord = a_texcoord;
}
// Fragment Shader
varying highp vec2 v_texCoord;
uniform sampler2D u_texture0;
uniform int slices;
void main()
{
    lowp vec3 w = vec3(1.0, 1.0, 1.0);
    lowp vec3 b = vec3(0.0, 0.0, 0.0);
    lowp vec3 mix = mix(b, w, (v_texCoord.y - (float(slices) / 10.0)));
    gl_FragColor = texture2D(u_texture0, v_texCoord) * vec4(mix, 1.0);
}
But this shader is creating the following:
current result
And I don't know how to "flip" the image horizontally; I tried so many different parameters in the rotation matrix (I even tried to use a so-called "mirror matrix"), but I don't know how to reflect the image at the bottom of the original image.
If you're talking about what images.google.com returns for "coverflow", then you don't need a rotation matrix at all.
void main()
{
    gl_Position = (u_projectionMatrix * u_modelViewMatrix) * a_position;
    v_color = a_color;
    v_texCoord = vec2(a_texcoord.x, 1.0 - a_texcoord.y);
}
Simply flip it vertically.
If you insist on using a matrix and want to make a "mirror" shader (one that takes the object and puts it under the "floor" to make a reflection), then you need a mirror matrix (don't forget to adjust front-face/back-face culling):
mat4(1.0, 0.0, 0.0, 0.0,
0.0, -1.0, 0.0, 0.0,
0.0, 0.0, 1.0, 0.0,
0.0, 0.0, 0.0, 1.0 );
AND you must know where the floor is.
gl_Position = (u_projectionMatrix * u_modelViewMatrix) * (a_position * mirrorMatrix - floor);
Alternatively, you could put the floor translation into the same matrix. Basically, to mirror against an arbitrary height, you need to combine three transforms (pseudocode):
translate(0, -floorHeight, 0) * scale(1, -1, 1) * translate(0, floorHeight, 0)
and put them into your matrix.
Also, it might make sense to split the modelView matrix into separate "model" (object/world) and "view" matrices. That way it'll be easier to perform transformations like these.
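As a concrete illustration, here is a minimal GLSL sketch of that combined mirror-about-the-floor matrix, assuming column-vector (OpenGL-style) conventions and a hypothetical u_floorHeight uniform giving the floor height in object space:
// Vertex Shader (sketch)
uniform highp mat4 u_modelViewMatrix;
uniform highp mat4 u_projectionMatrix;
uniform highp float u_floorHeight; // hypothetical: height of the floor plane in object space
attribute highp vec4 a_position;
attribute highp vec2 a_texcoord;
varying highp vec2 v_texCoord;

// Reflects across the plane y == h (maps y to 2*h - y); equivalent to translating the
// floor to the origin, flipping y, and translating back. GLSL mat4 constructors are
// column-major, so the last column holds the translation.
mat4 mirrorAboutFloor(float h)
{
    return mat4( 1.0,      0.0, 0.0, 0.0,
                 0.0,     -1.0, 0.0, 0.0,
                 0.0,      0.0, 1.0, 0.0,
                 0.0,  2.0 * h, 0.0, 1.0 );
}

void main()
{
    // Remember to flip front-face/back-face culling when drawing the reflection.
    gl_Position = u_projectionMatrix * u_modelViewMatrix * (mirrorAboutFloor(u_floorHeight) * a_position);
    v_texCoord = a_texcoord;
}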