Weird behaviour with OpenGL Uniform Buffers on OSX

I am having some weird behaviour with uniform buffers in my hobby OpenGL 4.1 engine.
On Windows everything works fine (both Intel and Nvidia GPUs), but on my MacBook (also Intel) it does not.
To explain what is happening on OSX: if I hardcode all my uniform buffer variables in the actual fragment shader code, I am able to render perfectly fine, but if I set them back to the variables, I get nothing.
I had a look at the OpenGL state using apitrace and all the variable values are perfect, so I am a bit confused as to what is going on here.
I am hoping this is just a code bug and not some underlying issue with the drivers.
Below is the fragment shader code; if I hardcode all the DirectionalLight variables, everything works fine.
#version 410

struct DirectionalLightData
{
    vec4 Colour;
    vec3 Direction;
    float Intensity;
};

layout(std140) uniform ObjectBuffer
{
    mat4 Model;
};

layout(std140) uniform FrameBuffer
{
    mat4 Projection;
    mat4 View;
    DirectionalLightData DirectionalLight;
    vec3 ViewPos;
};

uniform sampler2D PositionMap;
uniform sampler2D NormalMap;
uniform sampler2D AlbedoSpecMap;

layout(location = 0) in vec2 TexCoord;

out vec4 FinalColour;

float CalcDiffuseContribution(vec3 lightDir, vec3 normal)
{
    return max(dot(normal, -lightDir), 0.0f);
}

float CalcSpecularContribution(vec3 lightDir, vec3 viewDir, vec3 normal, float specularExponent)
{
    vec3 reflectDir = reflect(lightDir, normal);
    vec3 halfwayDir = normalize(lightDir + viewDir);
    return pow(max(dot(normal, halfwayDir), 0.0f), specularExponent);
}

float CalcDirectionLightFactor(vec3 viewDir, vec3 lightDir, vec3 normal)
{
    float diffuseFactor = CalcDiffuseContribution(lightDir, normal);
    float specularFactor = CalcSpecularContribution(lightDir, viewDir, normal, 1.0f);
    return diffuseFactor * specularFactor;
}

void main()
{
    vec3 position = texture(PositionMap, TexCoord).rgb;
    vec3 normal = texture(NormalMap, TexCoord).rgb;
    vec3 albedo = texture(AlbedoSpecMap, TexCoord).rgb;
    vec3 viewDir = normalize(ViewPos - position);
    float directionLightFactor = CalcDirectionLightFactor(viewDir, DirectionalLight.Direction, normal) * DirectionalLight.Intensity;
    FinalColour.rgb = albedo * directionLightFactor * DirectionalLight.Colour.rgb;
    FinalColour.a = 1.0f * DirectionalLight.Colour.a;
}
Here is the order in which I update and bind the UBOs (I have pulled these from apitrace, as there is too much code to copy-paste here):
glGetActiveUniformBlockName(5, 0, 255, NULL, FrameBuffer);
glGetUniformBlockIndex(5, FrameBuffer) = 0;
glGetActiveUniformBlockName(5, 1, 255, NULL, ObjectBuffer);
glGetUniformBlockIndex(5, ObjectBuffer) = 1;
glBindBuffer(GL_UNIFORM_BUFFER, 1);
glMapBufferRange(GL_UNIFORM_BUFFER, 0, 172, GL_MAP_WRITE_BIT);
memcpy(0x10b9f8000, [binary data, size = 172 bytes], 172);
glUnmapBuffer(GL_UNIFORM_BUFFER);
glBindBufferBase(GL_UNIFORM_BUFFER, 0, 2);
glBindBufferBase(GL_UNIFORM_BUFFER, 1, 1);
glBindBuffer(GL_UNIFORM_BUFFER, 2);
glMapBufferRange(GL_UNIFORM_BUFFER, 0, 64, GL_MAP_WRITE_BIT);
memcpy(0x10b9f9000, [binary data, size = 64 bytes], 64);
glUnmapBuffer(GL_UNIFORM_BUFFER);
glUniformBlockBinding(5, 1, 0);
glUniformBlockBinding(5, 0, 1);
glDrawArrays(GL_TRIANGLES, 0, 6);
Note that the FrameBuffer UBO has ID 1 and the ObjectBuffer UBO has ID 2.
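For reference, here is a sketch of a CPU-side mirror of the FrameBuffer block under std140 rules, assuming GLM types (the engine's actual types are not shown in the question). The offsets add up to 172 bytes, which matches the mapped range in the trace above:

#include <glm/glm.hpp>

// std140 offsets (bytes): Projection 0, View 64, Colour 128, Direction 144,
// Intensity 156 (a float packs into the vec3's trailing 4 bytes), ViewPos 160.
// Total: 172 bytes -- the same size as the glMapBufferRange() call above.
struct DirectionalLightDataStd140 {
    glm::vec4 Colour;
    glm::vec3 Direction;
    float     Intensity;
};

struct FrameBufferStd140 {
    glm::mat4 Projection;
    glm::mat4 View;
    DirectionalLightDataStd140 DirectionalLight;
    glm::vec3 ViewPos;
};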

I think that when you are using the std140 layout, your data members must follow its alignment rules, so you cannot freely mix vec4 with vec3 or float. Either keep all variables mat4 and vec4, or don't use the std140 layout, and instead calculate the UBO alignment and the offsets of your variables on the application side and set the values accordingly. See the usage of GL_UNIFORM_BUFFER_OFFSET_ALIGNMENT.
As an experiment, change all variables to mat4 and vec4 and see whether your issue goes away.
If you did not use the std140 layout for a block, you will need to query the byte offset for each uniform within the block. The OpenGL specification explains the storage of each of the basic types, but not the alignment between types. Struct members, just like regular uniforms, each have a separate offset that must be individually queried.
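A minimal sketch of that query, assuming a linked program object named program and the uniform names from the shader above:

// Query the byte offset of every uniform in the blocks at runtime instead of
// relying on std140 rules.
const char* names[] = {
    "Projection", "View",
    "DirectionalLight.Colour", "DirectionalLight.Direction",
    "DirectionalLight.Intensity", "ViewPos"
};
GLuint indices[6];
GLint  offsets[6];
glGetUniformIndices(program, 6, names, indices);
glGetActiveUniformsiv(program, 6, indices, GL_UNIFORM_OFFSET, offsets);
// offsets[i] is now the byte offset to use when writing names[i] into the UBO.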

After a few days of digging I seem to have found the issue.
I was not calling glBindBufferBase() after binding a different shader program.
Such a silly mistake caused me so much grief.
Thanks everyone for the help.
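For future readers, a minimal sketch of the fix, using the handles from the trace above (the variable names are mine, not the engine's):

// Re-establish which buffer objects live at the binding points the program's
// blocks are mapped to whenever the program changes. Binding assignments
// follow the trace: ObjectBuffer (ID 2) at binding 0, FrameBuffer (ID 1) at 1.
glUseProgram(program);
glUniformBlockBinding(program, objectBlockIndex, 0);      // block -> binding 0
glUniformBlockBinding(program, frameBlockIndex,  1);      // block -> binding 1
glBindBufferBase(GL_UNIFORM_BUFFER, 0, objectBufferUBO);  // buffer ID 2
glBindBufferBase(GL_UNIFORM_BUFFER, 1, frameBufferUBO);   // buffer ID 1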

Related

Raycasting with InstancedMesh, InstancedBufferGeometry, custom shader

Basically, I can't get raycasting to work with them. My guess is that my matrix coordinate calculation method is wrong; I don't know how to do it right.
I set the vertex position and offset in the vertex shader, and in the InstancedMesh I set the same offset, expecting the raycast to return an instanceId, but nothing intersects. You can find my entire code here.
I tried to adapt an official raycasting example here, but can't figure out what I did wrong. My hodgepodge uses InstancedMesh, InstancedBufferGeometry, and a custom shader together. My objective is to learn how it all works.
My question is: where did I go wrong?
My vertex shader:
precision highp float;

uniform mat4 modelViewMatrix;
uniform mat4 projectionMatrix;

attribute vec3 position;
attribute vec4 color;
attribute vec3 offset;

varying vec3 vPosition;
varying vec4 vColor;

void main() {
    vColor = vec4(color);
    vPosition = offset*1.0 + position;
    gl_Position = projectionMatrix * modelViewMatrix * vec4( vPosition, 1.0 );
    // if gl_Position not set, nothing is shown
}
My InstancedMesh matrix setting:
for (let i = 0; i < SQUARE_COUNT; i++) {
    transform.position.set(offsets[i], offsets[i+1], offsets[i+2])
    transform.updateMatrix()
    mesh.setMatrixAt(i, transform.matrix)
}
The offsets are set beforehand as follows:
for (let i = 0; i < SQUARE_COUNT; i++) {
    offsets.push( 0 + i*0.05, 0 + i*0.05, 0 + i*0.05 ); // same is set in InstancedMesh
    colors.push( Math.random(), Math.random(), Math.random(), Math.random() );
}
The raycaster has no awareness of any nonstandard transformation that you do in your vertex shader. That's just the way it works. It has no way of knowing that you are doing:
vPosition = offset*1.0 + position;
in your shader.
It works by assuming that you are running the bog-standard vertex shader with no additional transforms. It also assumes that every object you are casting against has a well-defined/computed bounding box.
If you are going to use raycasting, you may have to make a non-rendered scene that represents your objects in their final rendered positions, and cast against that.

GLSL - different precision in different parts of fragment shader

I have a simple fragment shader that draws a test grid pattern.
I don't really have a problem, but I've noticed a weird behavior that's inexplicable to me. Don't mind the weird constants; they get filled in during shader assembly before compilation. Also, vertexPosition is the actual calculated position in world space, so I can move the shader texture when the mesh itself moves.
Here's the code of my shader:
#version 300 es
precision highp float;

in highp vec3 vertexPosition;
out mediump vec4 fragColor;

const float squareSize = __CONSTANT_SQUARE_SIZE;
const vec3 color_base = __CONSTANT_COLOR_BASE;
const vec3 color_l1 = __CONSTANT_COLOR_L1;

float minWidthX;
float minWidthY;
vec3 color_green = vec3(0.0, 1.0, 0.0);

void main()
{
    // calculate l1 border positions
    float dimention = squareSize;
    int roundX = int(vertexPosition.x / dimention);
    int roundY = int(vertexPosition.z / dimention);
    float remainderX = vertexPosition.x - float(roundX)*dimention;
    float remainderY = vertexPosition.z - float(roundY)*dimention;

    vec3 dyX = dFdy(vec3(vertexPosition.x, vertexPosition.y, 0));
    vec3 dxX = dFdx(vec3(vertexPosition.x, vertexPosition.y, 0));
    minWidthX = max(length(dxX), length(dyX));

    vec3 dyY = dFdy(vec3(0, vertexPosition.y, vertexPosition.z));
    vec3 dxY = dFdx(vec3(0, vertexPosition.y, vertexPosition.z));
    minWidthY = max(length(dxY), length(dyY));

    // Fill l1 squares
    if (remainderX <= minWidthX)
    {
        fragColor = vec4(color_l1, 1.0);
        return;
    }
    if (remainderY <= minWidthY)
    {
        fragColor = vec4(color_l1, 1.0);
        return;
    }

    // fill base color
    fragColor = vec4(color_base, 1.0);
    return;
}
So, with this code everything works well.
I then wanted to optimize it a little by moving the calculations that only concern horizontal lines below the vertical-line check, because those calculations are useless if the vertical-line check is true. Like this:
#version 300 es
precision highp float;

in highp vec3 vertexPosition;
out mediump vec4 fragColor;

const float squareSize = __CONSTANT_SQUARE_SIZE;
const vec3 color_base = __CONSTANT_COLOR_BASE;
const vec3 color_l1 = __CONSTANT_COLOR_L1;

float minWidthX;
float minWidthY;
vec3 color_green = vec3(0.0, 1.0, 0.0);

void main()
{
    // calculate l1 border positions
    float dimention = squareSize;
    int roundX = int(vertexPosition.x / dimention);
    int roundY = int(vertexPosition.z / dimention);
    float remainderX = vertexPosition.x - float(roundX)*dimention;
    float remainderY = vertexPosition.z - float(roundY)*dimention;

    vec3 dyX = dFdy(vec3(vertexPosition.x, vertexPosition.y, 0));
    vec3 dxX = dFdx(vec3(vertexPosition.x, vertexPosition.y, 0));
    minWidthX = max(length(dxX), length(dyX));

    // Fill l1 squares
    if (remainderX <= minWidthX)
    {
        fragColor = vec4(color_l1, 1.0);
        return;
    }

    vec3 dyY = dFdy(vec3(0, vertexPosition.y, vertexPosition.z));
    vec3 dxY = dFdx(vec3(0, vertexPosition.y, vertexPosition.z));
    minWidthY = max(length(dxY), length(dyY));

    if (remainderY <= minWidthY)
    {
        fragColor = vec4(color_l1, 1.0);
        return;
    }

    // fill base color
    fragColor = vec4(color_base, 1.0);
    return;
}
But even though it seems this should not affect the result at all, it does, and by quite a bit.
Below are two screenshots. The first one is from the original code; the second is from the "optimized" version, which works badly.
Original version:
Optimized version (looks much worse):
Notice how the lines became "fuzzy", even though seemingly no numbers should have changed at all.
Note: this isn't because minWidthX/Y are global; I initially optimized with them as locals.
I also initially moved the roundY and remainderY calculations below the X check as well, and the result is the same.
Note 2: I tried adding the highp keyword to each of those calculations specifically, but that doesn't change anything (not that I expected it to, but I tried nevertheless).
Could anyone please explain why this happens? I would like to know for my future shaders, and I would also like to optimize this one. I want to understand the principle behind the precision loss here, because it doesn't make any sense to me.
For the answer I'll refer to the OpenGL ES Shading Language 3.20 Specification, which is the same as the OpenGL ES Shading Language 3.00 Specification on this point.
8.14.1. Derivative Functions
[...] Derivatives are undefined within non-uniform control flow.
and further
3.9.2. Uniform and Non-Uniform Control Flow
When executing statements in a fragment shader, control flow starts as uniform control flow; all fragments enter the same control path into main(). Control flow becomes non-uniform when different fragments take different paths through control-flow statements (selection, iteration, and jumps).[...]
That means that the result of the derivative functions in the first case (of your question) is well defined.
But in the second case it is not:
if (remainderX <= minWidthX)
{
    fragColor = vec4(color_l1, 1.0);
    return;
}

vec3 dyY = dFdy(vec3(0, vertexPosition.y, vertexPosition.z));
vec3 dxY = dFdx(vec3(0, vertexPosition.y, vertexPosition.z));
because the return statement acts like a selection, so all the code after the block containing the return is in non-uniform control flow. Fragments in the same 2x2 quad can take different paths, and once they do, the dFdx/dFdy results are undefined; that is what makes the lines fuzzy. The practical fix is to compute all the derivatives before the first conditional return, as in your original version.

Having some weird artifacting and odd triangle shadows with SSAO OpenGL implementation

I have been working on implementing SSAO in the engine I am writing, and a major problem has arrived. Everything was going quite well until I realized that my SSAO was not working correctly. There are two things that I can find wrong with my SSAO, and I am unable to figure out how to remedy them.
My shader code is at the end of this post; before that I will describe the problems with images.
Firstly, as seen in the screenshot below, there are some weird artifacts showing up depending on the viewing angle. So far I am assuming the way I am implementing the view matrix is wrong. I have done a lot of research about how this all should work and I understand it in theory. However, in practice things are not behaving as I would expect.
Secondly, whenever I get close to the blocks, I get very odd triangle shadows that appear around the edges of the screen, as shown in the next screenshot.
(Screenshot: odd triangle shadows around the edges of the screen.)
These two images show the main issues I am having. I am using a deferred-style renderer to render the geometry to a few textures (position, normals, color), then importing these textures and using them to produce the final output. The first two code blocks are the vertex and fragment shaders, respectively, for rendering the geometry to textures.
Vertex Shader
#version 430 core

layout(location=0) in mat4 modelMatrix;
layout(location=4) in vec4 VertexPosition;
layout(location=5) in vec4 VertexNormal;
layout(location=6) in vec3 VertexColor;
layout(location=7) in vec2 TextureCoords;

out vec4 vNormal;
out vec3 vColor;
out vec4 shaderCoord;
out vec2 texCoords;

layout(location=8) uniform mat4 V;
layout(location=12) uniform mat4 P;

void main()
{
    shaderCoord = (V*modelMatrix * VertexPosition);
    mat4 normalMatrix = transpose(inverse(V*modelMatrix));
    vNormal = (normalMatrix*VertexNormal);
    texCoords = TextureCoords;
    vColor = VertexColor;
    gl_Position = P*shaderCoord;
}
Fragment Shader
#version 430 core

in vec4 vNormal;
in vec3 vColor;
in vec4 shaderCoord;
in vec2 texCoords;

layout (location=0) out vec4 NormalBuffer;
layout (location=1) out vec4 ColorBuffer;
layout (location=2) out vec4 PositionBuffer;
layout (location=3) out vec4 TextureCoordBuffer;
out float fragDepth;

// Start of the main function.
void main()
{
    NormalBuffer = vec4(normalize(vNormal).xyz, 1.0);
    ColorBuffer = vec4(vColor, 1.0);
    PositionBuffer = vec4(shaderCoord.xyz, 1.0);
    TextureCoordBuffer = vec4(texCoords, 0.0, 1.0);
    fragDepth = gl_FragCoord.z;
}
As you can see, I am translating everything from world space to view space before I write it to the textures. I would much prefer to keep everything in world space, but when I do, the entire screen looks white with occasional hints of shadows, and the background swaps between white and black depending on the camera angle.
Next are my SSAO shaders. To implement these I followed a few tutorials, so they probably look familiar. If the tutorials were correct, the next two shaders should work, but they do not.
Vertex shader that just creates a quad and applies the final texture to it:
#version 430 core

layout (location=0) in vec3 VertexPosition;
layout (location=1) in vec2 TextureCoords;

out vec2 texCoords;

void main (){
    texCoords = TextureCoords;
    gl_Position = vec4(VertexPosition, 1.0);
}
Fragment shader for SSAO
#version 430 core

in vec2 texCoords;

layout (location=0) out vec4 fColor;

uniform sampler2D NormalBuffer;
uniform sampler2D positionBuffer;
uniform sampler2DArrayShadow shadowMap;
uniform sampler1D SSAOKernelMap;
uniform sampler2D SSAONoiseMap;

layout(location=12) uniform mat4 P;
layout(location=8) uniform mat4 V;

uniform uint kernelSize;
uniform vec2 windowSize;

// Define variables for SSAO processing.
float radius = 0.5;
float SSAOBias = 0.025;
float power = 1.5;
//mat4 biasMatrix = mat4(0.5,0.0,0.0,0.0,0.0,0.5,0.0,0.0,0.0,0.0,0.5,0.0,0.5,0.5,0.5,1.0);

void main()
{
    // Retrieve from textures
    vec3 shaderCoord = (texture(positionBuffer, texCoords)).xyz;
    vec3 vNormal = normalize((texture(NormalBuffer, texCoords)).rgb);

    // Process SSAO
    vec2 NoiseScale = vec2(windowSize.x/4.0, windowSize.y/4.0);
    vec3 randVec = normalize(texture(SSAONoiseMap, texCoords*NoiseScale).xyz);
    vec3 tangent = normalize(randVec - vNormal * dot(randVec, vNormal));
    vec3 bitTangent = cross(vNormal, tangent);
    mat3 TBN = mat3(tangent, bitTangent, vNormal);

    // Begin processing of SSAO with the inputted kernel samples
    float Occlusion = 0.0;
    for(int i=0; i<kernelSize; i++){
        // Note: the integer loop index is used directly as a normalized 1D
        // texture coordinate here -- this is the access the EDIT below refers to.
        vec4 kernelSample = texture(SSAOKernelMap, i);
        vec3 TSample = TBN*kernelSample.rgb;
        TSample = shaderCoord + TSample * radius;

        vec4 newCoord = vec4(TSample, 1.0);
        newCoord = P*newCoord;
        newCoord.xyz /= newCoord.w;
        newCoord.xyz = newCoord.xyz * 0.5 + 0.5;

        float sampleDepth = texture(positionBuffer, newCoord.xy).z;
        //float rangeCheck = smoothstep(0.0,1.0, radius / abs(shaderCoord.z-sampleDepth));
        Occlusion += (sampleDepth >= TSample.z+SSAOBias ? 1.0 : 0.0);
    }
    Occlusion = 1.0 - (Occlusion/kernelSize);
    fColor = vec4(vec3(Occlusion), 1.0f);
}
That is all the information I can think to provide initially. Any help you can offer would be immensely helpful! If any other information would help, please let me know and I will be happy to provide it.
EDIT:
I figured out that one of my issues was the way I was accessing the 1D texture above, which made all the kernel samples very strange. I fixed that, and now I am getting something like the image below, where the screen is lighter on one side and darker on the other. The contrast line moves with the camera.
Any help with this issue would be immensely appreciated!
I have found two things that were wrong; fixing them mostly resolved the issue this post is about.
Firstly, the format in which I was passing the kernel map was off, so all the values were quite skewed.
Secondly, I was unable to figure out why, but when I passed the position and normal values to the lighting fragment shader in world space and applied the view and projection matrices to them there, the results were very strange. However, if I applied the view and projection matrices to the position and normal values in the base geometry shader, then reverted that transformation in the lighting shader, everything worked perfectly.
If I find out any more information, I will happily post it here to update any future searchers.
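For anyone hitting the same "skewed kernel" symptom, here is a hedged sketch of the kind of format fix meant above. The thread does not show the original upload code, so the variable names and the GL_RGB16F choice are assumptions:

// Upload the SSAO kernel (assumed to be a std::vector<glm::vec3>) into a float
// texture. If the internal format is an 8-bit normalized one, the signed,
// fractional kernel values get clamped and quantized, skewing every sample.
glBindTexture(GL_TEXTURE_1D, ssaoKernelMap);
glTexImage1D(GL_TEXTURE_1D, 0, GL_RGB16F, kernelCount, 0, GL_RGB, GL_FLOAT, kernel.data());
glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);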

Diffuse lighting artifacts (OpenGL 4)

I'm trying to implement the simplest possible diffuse lighting after reading a few tutorials, but I'm failing miserably.
I load a 3D mesh from a Wavefront OBJ file, and if I don't apply lighting it looks just fine. But when I do apply lighting, the animal looks like a chessboard and the cube is messed up even more:
animal comparison (with normals)
cube comparison (with normals)
Here is the vertex shader:
#version 400

layout (location = 0) in vec4 a_position;
layout (location = 1) in vec3 a_texCoords;
layout (location = 2) in vec3 a_normal;

uniform mat4 u_viewProjection;
uniform mat4 u_model;

out vec3 v_fragPos;
out vec3 v_fragNormal;

void main() {
    v_fragPos = a_position.xyz;
    v_fragNormal = a_normal;
    gl_Position = u_viewProjection * a_position;
}
I pass the fragment position and normals to the fragment shader as-is because I'm not applying any model transformations; I simply assume that a model already has world coordinates after loading it from the file (ignore the u_model uniform, it's unused for now).
Then, the fragment shader:
#version 400

uniform vec3 u_lightPos;
uniform vec3 u_lightColor;
uniform vec3 u_diffuseColor;
uniform vec3 u_specularColor;
uniform vec3 u_ambientColor;

in vec3 v_fragPos;
in vec3 v_fragNormal;

out vec4 o_fragColor;

void main() {
    vec3 lightDir = u_lightPos - v_fragPos;
    float cosTheta = max(dot(normalize(v_fragNormal), normalize(lightDir)), 0.0);
    vec3 diffuseContribution = cosTheta * u_lightColor;
    o_fragColor = vec4(u_diffuseColor * diffuseContribution, 1.0);
}
No model or normal matrices are used, because no rotation (or scale) is applied to the model for now.
I thought about incorrect normals, but at least a simple cube should have correct ones, right?
Also, maybe I should mention that I'm using NSOpenGLView under macOS.
Any help will be appreciated.
Thank you!
UPDATE:
Adding VBO setup.
This is what a single vertex looks like:
struct Vertex1P1N1UV {
    glm::vec4 mPosition;
    glm::vec3 mTextureCoords;
    glm::vec3 mNormal;

    Vertex1P1N1UV();
    Vertex1P1N1UV(glm::vec4 position, glm::vec3 texcoords, glm::vec3 normal);
};
I initialize my VAO like this:
auto* VAO = new GLVertexArray<Vertex1P1N1UV>();
VAO->initialize(subMesh.vertices(), GLVertexArrayLayoutDescription({
    static_cast<int>(glm::vec4::length() * sizeof(GLfloat)),
    static_cast<int>(glm::vec3::length() * sizeof(GLfloat)),
    static_cast<int>(glm::vec3::length() * sizeof(GLfloat)) }));
The VAO initialize method:
void initialize(const std::vector<Vertex>& vertices, const GLVertexArrayLayoutDescription& layoutDescription) {
    bind();
    mVertexBuffer.initialize(vertices);

    GLuint offset = 0;
    for (GLuint location = 0; location < layoutDescription.getAttributeSizes().size(); location++) {
        glEnableVertexAttribArray(location);
        GLsizei attributeSize = layoutDescription.getAttributeSizes()[location];
        glVertexAttribPointer(location, attributeSize / sizeof(GLfloat), GL_FLOAT, GL_FALSE, sizeof(Vertex), reinterpret_cast<void *>(offset));
        offset += attributeSize;
    }
}
And the buffer initialize method:
void initialize(const std::vector<Vertex>& data) override {
    bind();
    glBufferData(GL_ARRAY_BUFFER, data.size() * sizeof(Vertex), data.data(), GL_STATIC_DRAW);
}
Vertex becomes Vertex1P1N1UV in this case.
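As an aside, the same attribute offsets can be written directly against the struct with offsetof, which documents the stride assumption; a sketch assuming the Vertex1P1N1UV layout above and <cstddef> included:

// Equivalent attribute setup expressed in terms of the struct members.
glVertexAttribPointer(0, 4, GL_FLOAT, GL_FALSE, sizeof(Vertex1P1N1UV),
                      reinterpret_cast<void*>(offsetof(Vertex1P1N1UV, mPosition)));
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex1P1N1UV),
                      reinterpret_cast<void*>(offsetof(Vertex1P1N1UV, mTextureCoords)));
glVertexAttribPointer(2, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex1P1N1UV),
                      reinterpret_cast<void*>(offsetof(Vertex1P1N1UV, mNormal)));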
UPDATE:
I implemented normal visualization (re-uploaded the screenshots, scroll to top).
What bothers me is that I can see normals through the mesh, as if it were transparent despite the opaque color. Is this a depth-testing problem?
After 2 days of struggling, I found the problem.
I did not enable the depth buffer on the NSOpenGLView. That is one line of code.
If someone stumbles upon this, see OpenGL GL_DEPTH_TEST not working for a solution.
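A minimal sketch of the GL side of that fix; note the NSOpenGLView also needs a pixel format with a nonzero depth-buffer size (e.g. NSOpenGLPFADepthSize), and the exact setup depends on how the view is created:

// With a depth buffer present in the framebuffer, enable the test and clear
// depth along with color each frame.
glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_LESS);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);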

Clash between J3DI, Win7 and Nvidia GeForce

I have drawn a sphere using J3DI (a WebGL library) under Windows 7 64-bit as the OS, with an Nvidia GeForce GT 330M as the graphics card.
First I drew it in a plain blue color and it appeared correctly.
Then I tried to put a texture on it, but the sphere appeared like in this image:
http://s1.postimage.org/1ekqrgolg/earth.jpg
while on a Mac it appeared like this:
http://s1.postimage.org/1eksf0138/error.jpg
So what is the problem? Is it the OS, J3DI, or the graphics card?
As additional information, here is the shader code I used.
Note: this code was taken from OpenGL and HTML5 (a video course from O'Reilly).
VertexShader:
uniform mat4 u_modelViewProjMatrix;
uniform mat4 u_normalMatrix;
uniform vec3 lightDir;

attribute vec3 vNormal;
attribute vec4 vTexCoord;
attribute vec4 vPosition;

varying float v_Dot;
varying vec2 v_texCoord;

void main()
{
    gl_Position = u_modelViewProjMatrix * vPosition;
    v_texCoord = vTexCoord.st;
    vec4 transNormal = u_normalMatrix * vec4(vNormal, 1);
    v_Dot = max(dot(normalize(transNormal.xyz), normalize(lightDir)), 0.3);
}
PixelShader:
#ifdef GL_ES
precision highp float;
#endif

uniform sampler2D sampler2d;

varying float v_Dot;
varying vec2 v_texCoord;

void main()
{
    vec2 texCoord = vec2(v_texCoord.s, v_texCoord.t);
    vec4 color = texture2D(sampler2d, texCoord);
    color += vec4(0.1, 0.1, 0.1, 1);
    gl_FragColor = vec4(color.xyz * v_Dot, color.a);
}
The context function is:
function(context){
    // set up VBOs
    context.enableVertexAttribArray(0);
    context.enableVertexAttribArray(1);
    context.enableVertexAttribArray(2);

    context.bindBuffer(context.ARRAY_BUFFER, context.sphere.normalObject);
    context.vertexAttribPointer(0, 3, context.FLOAT, false, 0, 0);

    context.bindBuffer(context.ARRAY_BUFFER, context.sphere.texCoordObject);
    context.vertexAttribPointer(1, 2, context.FLOAT, false, 0, 0);

    context.bindBuffer(context.ARRAY_BUFFER, context.sphere.vertexObject);
    context.vertexAttribPointer(2, 3, context.FLOAT, false, 0, 0);

    context.bindBuffer(context.ELEMENT_ARRAY_BUFFER, context.sphere.indexObject);

    // construct the model-view * projection matrix
    var mvpMatrix = new J3DIMatrix4(context.perspectiveMatrix);
    mvpMatrix.setUniform(context, context.getUniformLocation(context.program, "u_modelViewProjMatrix"), false);

    // bind texture
    context.bindTexture(context.TEXTURE_2D, this.texture);
    context.drawElements(context.TRIANGLES, context.sphere.numIndices, context.UNSIGNED_SHORT, 0);
}
I'm really concerned about this issue.
You're almost there. In:
v_Dot = max(dot(normalize(transNormal.xyz), normalize(lightDir)), 0.3);
change the 0.3 at the end to 1.0; that should work (edit: at least for Windows; on the Mac it could be outdated drivers).
