How to texture a non-unwrapped model using a cubemap - opengl-es

I have lots of models that aren't unwrapped (they don't have UV coordinates). They are too complex to unwrap by hand, so I decided to texture them using a seamless cubemap:
[VERT]
attribute vec4 a_position;
varying vec3 texCoord;
uniform mat4 u_worldTrans;
uniform mat4 u_projTrans;
...
void main()
{
    gl_Position = u_projTrans * u_worldTrans * a_position;
    texCoord = vec3(a_position);
}
[FRAG]
varying vec3 texCoord;
uniform samplerCube u_cubemapTex;
void main()
{
    gl_FragColor = textureCube(u_cubemapTex, texCoord);
}
It works, but the result looks odd because the texturing depends on the vertex positions. If the model is more complex than a cube or a sphere, I see visible seams and low texture resolution on some parts of the object.
Reflection maps well onto the model, but it has a mirror effect.
Reflection:
[VERT]
attribute vec3 a_normal;
varying vec3 v_reflection;
uniform mat4 u_matViewInverseTranspose;
uniform vec3 u_cameraPos;
...
void main()
{
    mat3 normalMatrix = mat3(u_matViewInverseTranspose);
    vec3 n = normalize(normalMatrix * a_normal);
    // calculate reflection
    vec3 vView = a_position.xyz - u_cameraPos.xyz;
    v_reflection = reflect(vView, n);
    ...
}
How can I implement something like reflection, but with a "sticky" effect, as if the texture were attached to the vertices (it does not move with the camera)? Each side of the model should display its own side of the cubemap, so the result looks like ordinary 2D texturing. Any advice will be appreciated.
UPDATE 1
I summed up all the comments and decided to calculate the cubemap UVs myself. Since I use LibGDX, some names may differ from standard OpenGL ones.
Shader class:
public class CubemapUVShader implements com.badlogic.gdx.graphics.g3d.Shader {
    ShaderProgram program;
    Camera camera;
    RenderContext context;
    Matrix4 viewInvTraMatrix, viewInv;
    Texture texture;
    Cubemap cubemapTex;
    ...
    @Override
    public void begin(Camera camera, RenderContext context) {
        this.camera = camera;
        this.context = context;
        program.begin();
        program.setUniformMatrix("u_matProj", camera.projection);
        program.setUniformMatrix("u_matView", camera.view);
        cubemapTex.bind(1);
        program.setUniformi("u_textureCubemap", 1);
        texture.bind(0);
        program.setUniformi("u_texture", 0);
        context.setDepthTest(GL20.GL_LEQUAL);
        context.setCullFace(GL20.GL_BACK);
    }
    @Override
    public void render(Renderable renderable) {
        program.setUniformMatrix("u_matModel", renderable.worldTransform);
        viewInvTraMatrix.set(camera.view);
        viewInvTraMatrix.mul(renderable.worldTransform);
        program.setUniformMatrix("u_matModelView", viewInvTraMatrix);
        viewInvTraMatrix.inv();
        viewInvTraMatrix.tra();
        program.setUniformMatrix("u_matViewInverseTranspose", viewInvTraMatrix);
        renderable.meshPart.render(program);
    }
    ...
}
Vertex:
attribute vec4 a_position;
attribute vec2 a_texCoord0;
attribute vec3 a_normal;
attribute vec3 a_tangent;
attribute vec3 a_binormal;
varying vec2 v_texCoord;
varying vec3 v_cubeMapUV;
uniform mat4 u_matProj;
uniform mat4 u_matView;
uniform mat4 u_matModel;
uniform mat4 u_matViewInverseTranspose;
uniform mat4 u_matModelView;
void main()
{
    gl_Position = u_matProj * u_matView * u_matModel * a_position;
    v_texCoord = a_texCoord0;
    // CALCULATE CUBEMAP UV (WRONG!)
    // I decided that tm_l2g mentioned in the comments is u_matView * u_matModel
    v_cubeMapUV = vec3(u_matView * u_matModel * vec4(a_normal, 0.0));
    /*
    mat3 normalMatrix = mat3(u_matViewInverseTranspose);
    vec3 t = normalize(normalMatrix * a_tangent);
    vec3 b = normalize(normalMatrix * a_binormal);
    vec3 n = normalize(normalMatrix * a_normal);
    */
}
Fragment:
varying vec2 v_texCoord;
varying vec3 v_cubeMapUV;
uniform sampler2D u_texture;
uniform samplerCube u_textureCubemap;
void main()
{
    vec3 cubeMapUV = normalize(v_cubeMapUV);
    vec4 diffuse = textureCube(u_textureCubemap, cubeMapUV);
    gl_FragColor = vec4(diffuse.rgb, 1.0); // assigning a vec4 to gl_FragColor.rgb would not compile
}
The result is completely wrong:
I expect something like this:
UPDATE 2
The texture looks stretched on the sides and distorted in some places if I use the vertex position as the cubemap coordinate in the vertex shader:
v_cubeMapUV = a_position.xyz;
I uploaded euro.blend, euro.obj and the cubemap files for review.

That code works only for meshes that are centered around (0,0,0). If that is not the case, or even if (0,0,0) is not inside the mesh, then artifacts occur...
I would start with computing the BBOX (BBOXmin(x0,y0,z0), BBOXmax(x1,y1,z1)) of your mesh and translating the position used for the texture coordinate so it is centered around it:
center = 0.5*(BBOXmin+BBOXmax);
texCoord = vec3(a_position-center);
However, non-uniform vertex density would still lead to texture scaling artifacts, especially if the BBOX side sizes differ too much. Rescaling it to a cube would help:
vec3 center = 0.5*(BBOXmin+BBOXmax); // center of BBOX
vec3 size   = BBOXmax-BBOXmin;       // size of BBOX
vec3 r      = a_position-center;     // position centered around center of BBOX
r.x /= size.x;                       // rescale it to cube BBOX
r.y /= size.y;
r.z /= size.z;
texCoord = r;
Again, if the center of the BBOX is not inside the mesh then this will not work ...
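Putting both corrections together, a minimal vertex-shader sketch in the style of the question's shaders might look like this (the u_bboxMin/u_bboxMax uniforms are my names for values the application would precompute and upload; they are not part of the original code):
attribute vec4 a_position;
varying vec3 texCoord;
uniform mat4 u_worldTrans;
uniform mat4 u_projTrans;
uniform vec3 u_bboxMin; // hypothetical uniform: mesh-space bounding box minimum
uniform vec3 u_bboxMax; // hypothetical uniform: mesh-space bounding box maximum
void main()
{
    vec3 center = 0.5 * (u_bboxMin + u_bboxMax);
    vec3 size = u_bboxMax - u_bboxMin;
    // center the position on the BBOX and rescale it to a unit cube
    texCoord = (a_position.xyz - center) / size;
    gl_Position = u_projTrans * u_worldTrans * a_position;
}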
The reflection part is not clear to me. Do you have some images/screenshots?
[Edit1] simple example
I see it like this (without the center offsetting and aspect ratio corrections mentioned above):
[Vertex]
//------------------------------------------------------------------
#version 420 core
//------------------------------------------------------------------
uniform mat4x4 tm_l2g;
uniform mat4x4 tm_g2s;
layout(location=0) in vec3 pos;
layout(location=1) in vec4 col;
out smooth vec4 pixel_col;
out smooth vec3 pixel_txr;
//------------------------------------------------------------------
void main(void)
{
    pixel_col=col;
    pixel_txr=(tm_l2g*vec4(pos,0.0)).xyz;
    gl_Position=tm_g2s*tm_l2g*vec4(pos,1.0);
}
//------------------------------------------------------------------
[Fragment]
//------------------------------------------------------------------
#version 420 core
//------------------------------------------------------------------
in smooth vec4 pixel_col;
in smooth vec3 pixel_txr;
uniform samplerCube txr_skybox;
out layout(location=0) vec4 frag_col;
//------------------------------------------------------------------
void main(void)
{
    frag_col=texture(txr_skybox,pixel_txr);
}
//------------------------------------------------------------------
And here preview:
The white torus in the first few frames uses the fixed-function pipeline; the rest uses shaders. As you can see, the only inputs I use are the vertex position, the color, and the transform matrices: tm_l2g, which converts from mesh coordinates to the global world, and tm_g2s, which holds the perspective projection...
As you can see, I render the BBOX with the same cubemap texture I use for rendering the model, so it looks like a cool reflection/transparency effect :) (which was not intentional).
Anyway, when I change the line
pixel_txr=(tm_l2g*vec4(pos,0.0)).xyz;
into:
pixel_txr=pos;
in my vertex shader, the object becomes solid again:
You can combine both by passing two texture coordinate vectors and fetching two texels in the fragment shader, blending them together with some ratio. Of course, you would need to pass two cubemap textures, one for the object and one for the skybox...
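A fragment-shader sketch of that blend could look like the following (the sampler names, varying names, and the blend_ratio uniform are mine, purely illustrative):
#version 420 core
smooth in vec3 pixel_txr_obj;   // object-space direction (the "sticky" coordinate)
smooth in vec3 pixel_txr_sky;   // reflection direction for the skybox
uniform samplerCube txr_object; // hypothetical: cubemap applied to the object
uniform samplerCube txr_skybox; // hypothetical: cubemap of the environment
uniform float blend_ratio;      // hypothetical: 0.0 = object texture only, 1.0 = reflection only
layout(location=0) out vec4 frag_col;
void main(void)
{
    vec4 c_obj = texture(txr_object, pixel_txr_obj);
    vec4 c_sky = texture(txr_skybox, pixel_txr_sky);
    frag_col = mix(c_obj, c_sky, blend_ratio);
}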
The red warnings are from my CPU-side code, reminding me that I am trying to set uniforms that are not present in the shaders (as I adapted this from the bump-mapping example without changing the CPU-side code...)
[Edit2] here is a preview of your mesh with the offset
The vertex shader changes a bit (I just added the offsetting described above):
//------------------------------------------------------------------
#version 420 core
//------------------------------------------------------------------
uniform mat4x4 tm_l2g;
uniform mat4x4 tm_g2s;
uniform vec3 center=vec3(0.0,0.0,2.0);
layout(location=0) in vec3 pos;
layout(location=1) in vec4 col;
out smooth vec4 pixel_col;
out smooth vec3 pixel_txr;
//------------------------------------------------------------------
void main(void)
{
    pixel_col=col;
    pixel_txr=pos-center;
    gl_Position=tm_g2s*tm_l2g*vec4(pos,1.0);
}
//------------------------------------------------------------------
So by offsetting the center point you can get rid of the singular-point distortion. However, as I mentioned in the comments, for arbitrary meshes there will always be some distortion with cheap texturing tricks like this, as opposed to proper texture coordinates.
Beware: my mesh was resized/normalized (sadly, I do not remember whether it is the <-1,+1> range or something different, and I am too lazy to dig through the source code of the GLSL engine I tested this in), so the offset might need a different magnitude in your environment to achieve the same result.

Related

Three.js renders unprocessed png image for texture

With three.js, I am trying to create a scene where a plane becomes transparent as the camera moves away from it.
I textured the plane object with the round map tile, which was edited from the square image below.
When I load the round image through ShaderMaterial, the texture appears square, like the original image.
The weird thing is that it is rendered as intended when the image is loaded onto a regular mesh material.
Could you tell me why three.js behaves this way? Also, how might I render the round tile using a shader while keeping its ability to fade based on distance?
the full code is available here: https://codesandbox.io/s/tile-with-shader-7kw5v?file=/src/index.js
Here is an option that takes into account only the x and z coordinates of the plane and the camera.
vertex.glsl:
varying vec4 vPosition;
varying vec2 vUv;
void main() {
    vPosition = modelMatrix * vec4(position, 1.);
    vUv = uv;
    gl_Position = projectionMatrix * viewMatrix * vPosition;
}
frag.glsl:
uniform vec3 u_color;
uniform vec3 u_camera;
uniform vec3 u_plane;
uniform float u_rad;
uniform sampler2D u_texture;
varying vec4 vPosition;
varying vec2 vUv;
void main() {
    vec4 textureCol = texture2D(u_texture, vUv);
    float rad = distance(vPosition.xz, u_camera.xz); // distance in the xz-plane
    textureCol.a = 1.0 - step(1., rad / u_rad);
    gl_FragColor = textureCol;
}
and the u_rad uniform is
u_rad: { value: 50 },
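Note that step() gives a hard cutoff at the radius. If a gradual fade with distance is wanted, a smoothstep() variant of the alpha line (my suggestion, not part of the original answer; the 0.5 threshold is illustrative) would be:
textureCol.a = 1.0 - smoothstep(0.5, 1.0, rad / u_rad); // fades out between half the radius and the full radius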

Having some weird artifacting and odd triangle shadows with SSAO OpenGL implementation

I have been working on implementing SSAO in the engine I am writing, and a major problem has arisen. Everything was going quite well until I realized that my SSAO was not working correctly. There are two things I can find wrong with my SSAO, and I am unable to figure out how to remedy them.
My shader code is at the end of this post, before that I will be describing the problems with images.
Firstly, as seen in the screenshot below, there are some weird artifacts showing up depending on the viewing angle. So far I am assuming the way I am implementing the view matrix is wrong. I have done a lot of research into how this all should work, and I understand it in theory. However, in practice things are not changing as I would expect.
Secondly, whenever I get close to the blocks, I get very odd triangle shadows that appear around the edges of the screen, as shown in the next screenshot.
These two images show the main issues I am having. I am using a deferred-style renderer to render the geometry to a few textures (position, normals, color), then importing those textures and using them to compose the final output. The first two code blocks are the vertex and fragment shaders, respectively, for rendering the geometry to the textures.
Vertex Shader
#version 430 core
layout(location=0) in mat4 modelMatrix;
layout(location=4) in vec4 VertexPosition;
layout(location=5) in vec4 VertexNormal;
layout(location=6) in vec3 VertexColor;
layout(location=7) in vec2 TextureCoords;
out vec4 vNormal;
out vec3 vColor;
out vec4 shaderCoord;
out vec2 texCoords;
layout(location=8) uniform mat4 V;
layout(location=12) uniform mat4 P;
void main()
{
    shaderCoord = (V*modelMatrix * VertexPosition);
    mat4 normalMatrix = transpose(inverse(V*modelMatrix));
    vNormal = (normalMatrix*VertexNormal);
    texCoords = TextureCoords;
    vColor = VertexColor;
    gl_Position = P*shaderCoord;
}
Fragment Shader
#version 430 core
in vec4 vNormal;
in vec3 vColor;
in vec4 shaderCoord;
in vec2 texCoords;
layout (location=0) out vec4 NormalBuffer;
layout (location=1) out vec4 ColorBuffer;
layout (location=2) out vec4 PositionBuffer;
layout (location=3) out vec4 TextureCoordBuffer;
out float fragDepth;
//Start of the main function.
void main()
{
    NormalBuffer = vec4(normalize(vNormal).xyz, 1.0);
    ColorBuffer = vec4(vColor, 1.0);
    PositionBuffer = vec4(shaderCoord.xyz, 1.0);
    TextureCoordBuffer = vec4(texCoords, 0.0, 1.0);
    fragDepth = gl_FragCoord.z;
}
As you can see, I am transforming everything from world space to view space before writing it to the textures. I would much prefer to keep everything in world space, but when I do, the entire screen looks white with occasional hints of shadows, and the background swaps between white and black depending on the camera angle.
Next are my SSAO shaders. To implement these I followed a few tutorials, so they probably look familiar. If the tutorials were correct, the next two shaders should work, but they do not.
Vertex shader that just creates a quad and applies the final texture to it:
#version 430 core
layout (location=0) in vec3 VertexPosition;
layout (location=1) in vec2 TextureCoords;
out vec2 texCoords;
void main (){
    texCoords = TextureCoords;
    gl_Position = vec4(VertexPosition, 1.0);
}
Fragment shader for SSAO
#version 430 core
in vec2 texCoords;
layout (location=0) out vec4 fColor;
uniform sampler2D NormalBuffer;
uniform sampler2D positionBuffer;
uniform sampler2DArrayShadow shadowMap;
uniform sampler1D SSAOKernelMap;
uniform sampler2D SSAONoiseMap;
layout(location=12) uniform mat4 P;
layout(location=8) uniform mat4 V;
uniform uint kernelSize;
uniform vec2 windowSize;
//Define Variables for SSAO Processing.
float radius = 0.5;
float SSAOBias = 0.025;
float power = 1.5;
//mat4 biasMatrix = mat4(0.5,0.0,0.0,0.0,0.0,0.5,0.0,0.0,0.0,0.0,0.5,0.0,0.5,0.5,0.5,1.0);
void main()
{
    // Retrieve position and normal from the G-buffer textures
    vec3 shaderCoord = (texture(positionBuffer, texCoords)).xyz;
    vec3 vNormal = normalize((texture(NormalBuffer, texCoords)).rgb);
    // process SSAO
    vec2 NoiseScale = vec2(windowSize.x/4.0, windowSize.y/4.0);
    vec3 randVec = normalize(texture(SSAONoiseMap, texCoords*NoiseScale).xyz);
    vec3 tangent = normalize(randVec - vNormal * dot(randVec, vNormal));
    vec3 bitTangent = cross(vNormal, tangent);
    mat3 TBN = mat3(tangent, bitTangent, vNormal);
    // Begin processing of SSAO with the inputted kernel samples
    float Occlusion = 0.0;
    for(int i=0; i<kernelSize; i++){
        // 'i' is used directly as the 1D texture coordinate here; this lookup is the bug described in the EDIT below
        vec4 kernelSample = texture(SSAOKernelMap, i);
        vec3 TSample = TBN*kernelSample.rgb;
        TSample = shaderCoord + TSample * radius;
        vec4 newCoord = vec4(TSample, 1.0);
        newCoord = P*newCoord;
        newCoord.xyz /= newCoord.w;
        newCoord.xyz = newCoord.xyz * 0.5 + 0.5;
        float sampleDepth = texture(positionBuffer,newCoord.xy).z;
        //float rangeCheck = smoothstep(0.0,1.0, radius / abs(shaderCoord.z-sampleDepth));
        Occlusion += (sampleDepth >= TSample.z+SSAOBias?1.0:0.0);
    }
    Occlusion = 1.0 - (Occlusion/kernelSize);
    fColor = vec4(vec3(Occlusion),1.0f);
}
That is all the information I can think to provide initially. Any help you can provide would be immensely appreciated! If any other information would help, please let me know and I will be happy to provide it.
EDIT:
I figured out that one of my issues was the way I was accessing the 1D texture above, which made all the kernel samples very strange. I fixed that, and now I am getting something like the image below, where the screen is lighter on one side and darker on the other, and the contrast line moves with the camera.
Any help with this issue would be immensely appreciated!
I have found two things that were wrong, and fixing them mostly resolved the issue this post is about.
Firstly, the format in which I was passing the kernelMap was off, so all the values were quite skewed.
Secondly, I was unable to figure out why, but when I passed the position and normal values to the lighting fragment shader in world space and then applied the view and projection matrices to them, they turned out very strangely. However, if I applied the view and projection matrices to the position and normal values in the base geometry shader, then reverted that transformation in the lighting shader, everything worked perfectly.
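For reference, a corrected fetch for the 1D kernel lookup mentioned in the EDIT above might look like this (a sketch: sampler1D coordinates are normalized floats, so the loop index has to be mapped to the center of the i-th texel; the variable names follow the question's shader):
// sample the i-th kernel vector from the 1D texture
vec4 kernelSample = texture(SSAOKernelMap, (float(i) + 0.5) / float(kernelSize));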
If I find out any more information, I will happily post it here to update any future searchers.

Diffuse lighting artifacts (OpenGL 4)

I'm trying to implement the simplest possible diffuse lighting after reading a few tutorials, but I fail miserably.
I load a 3D mesh from a Wavefront OBJ file, and if I don't apply lighting, it looks just fine. But when I do apply lighting, the animal looks like a chessboard and the cube is messed up even more:
animal comparison (with normals)
cube comparison (with normals)
Here is the vertex shader:
#version 400
layout (location = 0) in vec4 a_position;
layout (location = 1) in vec3 a_texCoords;
layout (location = 2) in vec3 a_normal;
uniform mat4 u_viewProjection;
uniform mat4 u_model;
out vec3 v_fragPos;
out vec3 v_fragNormal;
void main() {
    v_fragPos = a_position.xyz;
    v_fragNormal = a_normal;
    gl_Position = u_viewProjection * a_position;
}
I pass the fragment position and normals to the fragment shader as-is because I'm not applying any model transformations; I simply assume the model already has world coordinates after loading it from the file (forget about the u_model uniform, it's not used for now).
Then, the fragment shader:
#version 400
uniform vec3 u_lightPos;
uniform vec3 u_lightColor;
uniform vec3 u_diffuseColor;
uniform vec3 u_specularColor;
uniform vec3 u_ambientColor;
in vec3 v_fragPos;
in vec3 v_fragNormal;
out vec4 o_fragColor;
void main() {
    vec3 lightDir = u_lightPos - v_fragPos;
    float cosTheta = max(dot(normalize(v_fragNormal), normalize(lightDir)), 0.0);
    vec3 diffuseContribution = cosTheta * u_lightColor;
    o_fragColor = vec4(u_diffuseColor * diffuseContribution, 1.0);
}
No model or normal matrices used, because no rotations (or scale) are applied to model for now.
I thought about incorrect normals, but at least a simple cube should have correct ones, right?
Also, maybe I should mention that I'm using NSOpenGLView under macOS.
Any help will be appreciated.
Thank you!
UPDATE:
Adding VBO setup.
This is how a single vertex looks:
struct Vertex1P1N1UV {
    glm::vec4 mPosition;
    glm::vec3 mTextureCoords;
    glm::vec3 mNormal;
    Vertex1P1N1UV();
    Vertex1P1N1UV(glm::vec4 position, glm::vec3 texcoords, glm::vec3 normal);
};
I initialize my VAO like this:
auto* VAO = new GLVertexArray<Vertex1P1N1UV>();
VAO->initialize(subMesh.vertices(), GLVertexArrayLayoutDescription({
    static_cast<int>(glm::vec4::length() * sizeof(GLfloat)),
    static_cast<int>(glm::vec3::length() * sizeof(GLfloat)),
    static_cast<int>(glm::vec3::length() * sizeof(GLfloat)) }));
VAO initialize method
void initialize(const std::vector<Vertex>& vertices, const GLVertexArrayLayoutDescription& layoutDescription) {
    bind();
    mVertexBuffer.initialize(vertices);
    GLuint offset = 0;
    for (GLuint location = 0; location < layoutDescription.getAttributeSizes().size(); location++) {
        glEnableVertexAttribArray(location);
        GLsizei attributeSize = layoutDescription.getAttributeSizes()[location];
        glVertexAttribPointer(location, attributeSize / sizeof(GLfloat), GL_FLOAT, GL_FALSE, sizeof(Vertex), reinterpret_cast<void *>(offset));
        offset += attributeSize;
    }
}
And buffer initialize method
void initialize(const std::vector<Vertex>& data) override {
    bind();
    glBufferData(GL_ARRAY_BUFFER, data.size() * sizeof(Vertex), data.data(), GL_STATIC_DRAW);
}
Vertex becomes Vertex1P1N1UV in this case.
UPDATE:
Implemented normal visualization (reuploaded screenshots, scroll to top).
What bothers me is that I can see the normals through the mesh, as if it were transparent despite the opaque color. Is this a depth testing problem?
After 2 days of struggling I found the problem.
I did not enable the depth buffer on the NSOpenGLView. That is one line of code.
If someone stumbles upon this, see OpenGL GL_DEPTH_TEST not working for a solution.

Lighting not dynamically changing on objects when moved

I'm having trouble with my light source and objects in my WebGL app. In my "drawScene" function, I load the viewport, clear the view, then render my light. After that I set up my matrix and render my VBOs (pushing and popping the matrix).
When I first load my app, the light source is correct: it's on top of the object.
I then move the light source to the left, and the lighting displays correctly on the object, as it should.
Then here is the problem: I move the object to the left, past the light source, but the lighting on the object does not move to the right side of the object.
If I move the lighting all the way to the left, it only changes on the object once the light passes the initial starting position (0, 0).
I thought it was a matrix push/pop issue, but I've corrected the architecture of the code: the light renders first, then the matrix is set, and then the object. It may be an issue with my shaders, but I cannot tell... Here is what my shaders look like. Does anyone think they could help me out?
<script id="shader-fs" type="x-shader/x-fragment"> // Textured, lit, normal mapped frag shader precision mediump float;
// uniforms from app
uniform sampler2D samplerD; // diffuse texture map
uniform sampler2D samplerN; // normal texture map
uniform vec3 uLightColor; // directional light color
uniform vec3 uAmbientColor; // ambient light color
// interpolated values from vertex shader
varying vec2 vTextureCoord;
varying vec3 vLightDir;
void main()
{
    // get the color values from the texture and normalmap
    vec4 clrDiffuse = texture2D(samplerD, vTextureCoord);
    vec3 clrNormal = texture2D(samplerN, vTextureCoord).rgb;
    // scale & normalize the normalmap color to get a normal vector for this texel
    vec3 normal = normalize(clrNormal * 2.0 - 1.0);
    // Calc normal dot lightdir to get directional lighting value for this texel.
    // Clamp negative values to 0.
    vec3 litDirColor = uLightColor * max(dot(normal, vLightDir), 0.0);
    // add ambient light, then multiply result by diffuse tex color for final color
    vec3 finalColor = (uAmbientColor + litDirColor) * clrDiffuse.rgb;
    // finally apply alpha of texture for the final color to render
    gl_FragColor = vec4(finalColor, clrDiffuse.a);
}
</script>
<script id="shader-vs" type="x-shader/x-vertex"> // Textured, lit, normal mapped vert shader precision mediump float;
attribute vec3 aVertexPosition;
attribute vec2 aTextureCoord; // Texture & normal map coords
uniform mat4 uMVMatrix;
uniform mat4 uPMatrix;
uniform vec3 uLightDir; // Application can set desired light direction
varying vec2 vTextureCoord; // Passed through to frag shader
varying vec3 vLightDir; // Compute transformed light dir for frag shader
void main(void)
{
    gl_Position = uPMatrix * uMVMatrix * vec4(aVertexPosition, 1.0);
    vTextureCoord = aTextureCoord;
    vLightDir = uLightDir;
    vLightDir = normalize(vLightDir);
}
</script>
@MaticOblak
function webGLStart() { ... this.light = new Light(); ...
function Light() { ... this.create(); ...
Light.prototype.create = function() {
    // note: four components are passed here, but lightDir is a vec3 uniform
    gl.uniform3fv(shader.prog.lightDir, new Float32Array([0.0, 0.0, 1.0, 1.0]));
    gl.uniform3fv(shader.prog.lightColor, new Float32Array([0.8, 0.8, 0.8]));
    gl.uniform3fv(shader.prog.ambientColor, new Float32Array([0.2, 0.2, 0.2]));
}
function drawScene() {
    ...
    this.light.render();
    mat4.identity(mvMatrix);
    mat4.translate(mvMatrix, [-player.getPosition()[0], -player.getPosition()[1], 0.0]);
    mat4.translate(mvMatrix, [0, 0, -20.0]);
    this.world.render();
    this.player.render();
Light.prototype.render = function() {
    gl.uniform3f(shader.prog.lightDir, this.position[0], this.position[1], this.position[2]);
}

Desktop GLSL without ftransform()

I'm porting a codebase of mine from fixed-function OpenGL 1.x to OpenGL 2.x - technically OpenGL ES 2.0, but I'm still coding on the desktop, just keeping in mind the limitations that ES 2.0 imposes, which are similar to the 3.1 'new' profile.
The problem is that, for anything other than 2D, creating a shader that takes the modelviewprojection matrix as a uniform does not seem to work. Normally I get a black screen, but if I set the Z value of all my vertices to 0, stuff shows up.
Putting my shaders in RenderMonkey works when I have ES 2.0 mode enabled, but on standard desktop GL it's just a black screen (no compiler errors/warnings):
vert shader:
uniform mat4 mvp_matrix;
uniform mat4 obj_matrix;
uniform vec4 u_color;
attribute vec3 a_vertex;
attribute vec2 a_texcoord0;
varying vec4 v_color;
varying vec2 v_texcoord0;
void main(void)
{
    v_color = u_color;
    gl_Position = mvp_matrix * (obj_matrix * vec4(a_vertex, 1.0));
    v_texcoord0 = a_texcoord0;
}
frag shader:
uniform sampler2D t_texture0;
varying vec2 v_texcoord0;
varying vec4 v_color;
void main(void)
{
    vec4 color = texture2D(t_texture0, v_texcoord0);
    gl_FragColor = color * v_color;
}
I am passing in the matrices as glUniformMatrix4fv(location, 1, GL_FALSE, mvpMatrix);
This shader works like gold for anything drawn in 2D. What am I doing wrong here? Or am I required to use ftransform() on desktop GL?
One thing I think needs a bit of clarification:
A model matrix transforms an object from object coordinates to world coordinates.
A view matrix transforms the world coordinates to eye coordinates.
A projection matrix converts eye coordinates to clip coordinates.
Based on standard naming conventions, the mvpMatrix is projection * view * model, in that order. There are no other matrices that you need to multiply by. Projection is your projection matrix (either orthographic or perspective), view is the camera transform matrix (NOT the modelview), and model is the position, scale, and rotation of your object.
I believe the issue lies either in multiplying matrices that don't need to be multiplied together, or in multiplying matrices in the wrong order (matrix multiplication isn't commutative).
If you haven't already solved this, I would recommend sending all 3 matrices over separately and later dumping the values back to make sure there are no issues sending the matrices over.
Vertex shader:
attribute vec4 a_vertex;
attribute vec2 a_texcoord0;
varying vec2 v_texcoord0;
uniform mat4 projection;
uniform mat4 view;
uniform mat4 model;
void main(void)
{
    gl_Position = projection * view * model * a_vertex;
    v_texcoord0 = a_texcoord0;
}
Fragment Shader:
uniform sampler2D t_texture0;
uniform vec4 u_color;
varying vec2 v_texcoord0;
void main(void)
{
    vec4 color = texture2D(t_texture0, v_texcoord0);
    gl_FragColor = color * u_color;
}
Also, I moved the color uniform to the fragment shader; passing it through as a varying is unnecessary when all the vertices have the same color.
