WebGL Jagged Edges (Anti-Aliasing) - opengl-es

So I'm currently rendering my terrain using vertex and color data only (no indices, no textures); maybe when I add in textures later, the problem will fix itself. Both at a distance and close up, I seem to be getting jagged edges where the edge of the terrain meets the background.
It seems like this problem could be fixed by antialiasing/multisampling, and I was under the impression that a WebGL context created with default parameters enables antialiasing, so this shouldn't be a problem.
Here's what the problem looks like:
Here's my context init code:
gl = WebGLUtils.setupWebGL(canvas);
if (!gl) {
    return;
}
gl.viewportWidth = canvas.width;
gl.viewportHeight = canvas.height;
gl.clearColor(0.0, 0.0, 0.0, 1.0);
gl.enable(gl.DEPTH_TEST);
By passing no additional options to setupWebGL, I'm assuming it uses the defaults, which include antialiasing...
Here's the applicable init code:
this.buffers['Position'] = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, this.buffers['Position']);
gl.bufferData(gl.ARRAY_BUFFER, this.getData('Position'), gl.STATIC_DRAW);
this.buffers['Color'] = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, this.buffers['Color']);
gl.bufferData(gl.ARRAY_BUFFER, this.getData('Color'), gl.STATIC_DRAW);
Here's the applicable draw code:
gl.bindBuffer(gl.ARRAY_BUFFER, this.buffers['Position']);
gl.vertexAttribPointer(
    this.shader.getAttribute('Position'),
    this.getItemSize('Position'),
    gl.FLOAT,
    false,
    0,
    0
);
gl.bindBuffer(gl.ARRAY_BUFFER, this.buffers['Color']);
gl.vertexAttribPointer(
    this.shader.getAttribute('Color'),
    this.getItemSize('Color'),
    gl.FLOAT,
    false,
    0,
    0
);
gl.drawArrays(gl.TRIANGLES, 0, this.getNumItems());

Anti-aliasing seems to be force-disabled in WebGL on the hardware I was using (MacBook Air, Intel HD 4000).
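If you want to be explicit about it (and detect when the driver refuses), you can request antialiasing in the context attributes and then check what you actually got. This is only a sketch: even a granted `antialias` attribute is a hint, and the browser/driver combination may still deny MSAA, as on the hardware above.

```javascript
// Request antialiasing explicitly instead of relying on defaults, and
// check whether the browser actually granted it. Even when requested,
// the attribute is only a hint and drivers may refuse MSAA.
function setupAntialiasedGL(canvas) {
    var gl = canvas.getContext('webgl', { antialias: true });
    if (!gl) {
        return null; // WebGL unavailable
    }
    var attrs = gl.getContextAttributes();
    if (!attrs.antialias) {
        // Requested but not granted: fall back to a post-process AA
        // (e.g. FXAA) or render at 2x resolution and downscale.
        console.warn('antialiasing was requested but not granted');
    }
    return gl;
}
```

When the attribute comes back false, the usual workarounds are a shader-based edge filter or supersampling, since there is no way to force MSAA from script.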

Related

Orbiting a cube in WebGL with glMatrix

https://jsfiddle.net/sepoto/Ln7qvv7w/2/
I have a base set up to display a cube with different colored faces. What I am trying to do is set up a camera and apply combined X-axis and Y-axis rotations so that the cube spins around both axes concurrently. There seem to be some problems with the matrices I set up, as I can see the blue face doesn't look quite right. There are examples of how this is done using older versions of glMatrix, but the code in those examples no longer works because of changes to vec4 in the glMatrix library. Does anyone know how this can be done using the latest version of glMatrix? I have attached a CDN to the fiddle.
Thank you!
function drawScene() {
    gl.viewport(0, 0, gl.viewportWidth, gl.viewportHeight);
    gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);
    mat4.ortho(mOrtho, -5, 5, 5, -5, 2, -200);
    mat4.identity(mMove);
    var rotMatrix = mat4.create();
    mat4.identity(rotMatrix);
    rotMatrix = mat4.fromYRotation(rotMatrix, yRot, rotMatrix);
    rotMatrix = mat4.fromXRotation(rotMatrix, xRot, rotMatrix);
    mat4.multiply(mMove, rotMatrix, mMove);
    setMatrixUniforms();
    gl.bindBuffer(gl.ARRAY_BUFFER, triangleVertexPositionBuffer);
    gl.vertexAttribPointer(shaderProgram.vertexPositionAttribute, triangleVertexPositionBuffer.itemSize, gl.FLOAT, false, 0, 0);
    gl.bindBuffer(gl.ARRAY_BUFFER, triangleColorBuffer);
    gl.vertexAttribPointer(shaderProgram.vertexColorAttribute, triangleColorBuffer.itemSize, gl.FLOAT, false, 0, 0);
    gl.drawArrays(gl.TRIANGLES, 0, triangleVertexPositionBuffer.numItems);
    yRot += 0.01;
    xRot += 0.01;
}
As the name says, fromYRotation() initializes a matrix to a given rotation. Hence, you need two temporary matrices for the partial rotations, which you can then combine:
var rotMatrix = mat4.create();
var rotMatrixX = mat4.create();
var rotMatrixY = mat4.create();
mat4.fromYRotation(rotMatrixY, yRot);
mat4.fromXRotation(rotMatrixX, xRot);
mat4.multiply(rotMatrix, rotMatrixY, rotMatrixX);
And the reason your blue face was behaving strangely is the missing depth test. Enable it in your initialization method:
gl.enable(gl.DEPTH_TEST);
You don't need to use three matrices:
// you should do allocations outside of the render loop
var rotMat = mat4.create();
// no need to set the matrix to identity, as
// fromYRotation resets rotMat's contents anyway
mat4.fromYRotation(rotMat, yRot);
mat4.rotateX(rotMat, rotMat, xRot);

get transformed vertices positions after vertex shader in THREE.js

I am changing the positions of some vertices inside a vertex shader, but I can't find a way to get those updated vertex positions back in JavaScript (I'm currently using THREE.js: the positions of my mesh's vertices always remain the same).
I found this link, Retrieve Vertices Data in THREE.js, but glGetTexImage doesn't exist in WebGL (and I'm quite skeptical about this floating-point-texture method as well).
Thanks for your help!
If reading the data from the buffer is the only problem, you can do that like this:
//render the pass into the buffer
renderer.render( rttScene, rttCamera, rttTexture, true );
// ...
//allocate an array to take the data from the buffer
var width = rttTexture.width;
var height = rttTexture.height;
var pixels = new Uint8Array(4 * width * height);
//get gl context bind buffers, read pixels
var gl = renderer.context;
var framebuffer = rttTexture.__webglFramebuffer;
gl.bindFramebuffer(gl.FRAMEBUFFER, framebuffer);
gl.viewport(0, 0, width, height);
gl.readPixels(0, 0, width, height, gl.RGBA, gl.UNSIGNED_BYTE, pixels);
gl.bindFramebuffer(gl.FRAMEBUFFER, null);
WebGL - reading pixel data from render buffer
But setting all this up is complicated, and reading the texture back might be slow. Why not do this on the CPU?
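For reference, newer three.js releases wrap exactly this bind-framebuffer-and-readPixels dance in a renderer method. If your version provides `WebGLRenderer.readRenderTargetPixels` (older releases from the era of this question did not), a sketch like this avoids touching the raw GL context:

```javascript
// Read back the pixels of a render target via three.js's own helper
// (available in newer releases; the manual gl.readPixels path above
// is needed on versions that predate it).
function readTargetPixels(renderer, renderTarget) {
    // RGBA, one byte per channel, matching an UNSIGNED_BYTE target
    var pixels = new Uint8Array(4 * renderTarget.width * renderTarget.height);
    renderer.readRenderTargetPixels(
        renderTarget, 0, 0, renderTarget.width, renderTarget.height, pixels);
    return pixels;
}
```

Note that an 8-bit readback quantizes your positions; encoding floats across the RGBA channels (or a float render target plus `EXT_color_buffer_float` where supported) is the usual workaround, which is what the floating-point-texture method in the linked answer is doing.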

WebGL Element Array Buffers not working

I'm learning WebGL with Haxe and I'm stuck on the part describing element arrays. What is supposed to be a square isn't showing up, and I don't know why.
var verticesArray = [
0.5, 0.5,
0.5,-0.5,
-0.5,-0.5,
-0.5, 0.5
];
var indicesArray = [0, 1, 3, 1, 2, 3];
var VBO = GL.createBuffer();
GL.bindBuffer(GL.ARRAY_BUFFER, VBO);
GL.bufferData(GL.ARRAY_BUFFER,new Float32Array(verticesArray), GL.STATIC_DRAW);
var EBO = GL.createBuffer();
GL.bindBuffer(GL.ELEMENT_ARRAY_BUFFER, EBO);
GL.bufferData(GL.ELEMENT_ARRAY_BUFFER, new UInt16Array(indicesArray), GL.STATIC_DRAW);
GL.vertexAttribPointer(0, 2, GL.FLOAT, false, 0, 0);
GL.enableVertexAttribArray(0);
GL.useProgram(shaderProgram);
GL.drawElements(GL.TRIANGLES, 6, GL.UNSIGNED_INT, 0);
GL.bindBuffer(GL.ARRAY_BUFFER, null);
GL.bindBuffer(GL.ELEMENT_ARRAY_BUFFER, null);
Here's all the code that's supposed to draw the square. I've got the shader program working, for sure.
Make sure the type in your call to drawElements matches the provided index array type, which is
gl.UNSIGNED_BYTE for Uint8Array
gl.UNSIGNED_SHORT for Uint16Array
gl.UNSIGNED_INT for Uint32Array (needs OES_element_index_uint extension)
Also, vertexAttribPointer and enableVertexAttribArray calls always operate on the buffer currently bound to the ARRAY_BUFFER target. In this case that isn't a problem, but it may very well become one, the way you wrote it, if you add additional VBOs. So either set them right after creating and binding the buffer, or otherwise make sure the correct buffer is bound.
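One way to keep the index type and the array type from drifting apart is a small helper. The numeric enum values below are fixed by the OpenGL ES/WebGL specs, so this sketch works even without a context; with the question's `UInt16Array` indices it returns `UNSIGNED_SHORT`, making the corrected call `GL.drawElements(GL.TRIANGLES, 6, GL.UNSIGNED_SHORT, 0);`.

```javascript
// drawElements' type argument must match the index buffer's array type.
// These enum values are fixed by the OpenGL ES / WebGL specifications.
var GL_UNSIGNED_BYTE  = 0x1401; // 5121, for Uint8Array
var GL_UNSIGNED_SHORT = 0x1403; // 5123, for Uint16Array
var GL_UNSIGNED_INT   = 0x1405; // 5125, for Uint32Array
                                // (WebGL1: requires OES_element_index_uint)

// Derive the correct drawElements type from the index array itself.
function indexTypeFor(indices) {
    if (indices instanceof Uint8Array)  return GL_UNSIGNED_BYTE;
    if (indices instanceof Uint16Array) return GL_UNSIGNED_SHORT;
    if (indices instanceof Uint32Array) return GL_UNSIGNED_INT;
    throw new Error('indices must be a Uint8Array, Uint16Array, or Uint32Array');
}
```

Passing the array through a helper like this when you upload the ELEMENT_ARRAY_BUFFER, and reusing the returned type in the draw call, removes the mismatch by construction.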

Displaying incorrect texture when handling multiple animated 2d sprites in webGL

I'm working on a fairly simple 2d sprite-based game that uses WebGL. Individual sprites can be translated, scaled, and rotated. I'm using sprite sheets as textures and then modifying texture coordinates on the fly to create animation effects. Just to make things interesting, new sprites are instantiated on the fly. All of this works fine and everything renders properly when I'm only using two different textures, but it breaks down when I try to add a third. I can have multiple instances of sprites using the two textures, but as soon as I try to create an instance of a sprite with the third texture, it all goes wrong. I'm new to WebGL and I can't seem to find a tutorial that covers multiple textures inside an event loop. I figure I was doing it wrong even with two sprites, but managed to get away with it until I added more complexity.
Here are my shaders:
void main() {
    // Multiply the position by the matrix.
    vec2 position = (u_matrix * vec3(a_position, 1)).xy;
    // convert the position from pixels to 0.0 to 1.0
    vec2 zeroToOne = position / u_resolution;
    // convert from 0->1 to 0->2
    vec2 zeroToTwo = zeroToOne * 2.0;
    // convert from 0->2 to -1->+1 (clipspace)
    vec2 clipSpace = zeroToTwo - 1.0;
    gl_Position = vec4(clipSpace * vec2(1, -1), 0, 1);
    v_texCoord = a_texCoord;
}
</script>
<script id="2d-fragment-shader" type="x-shader/x-fragment">
precision mediump float;
// our texture
uniform sampler2D u_image0;
uniform sampler2D u_image1;
uniform sampler2D u_image2;
// the texCoords passed in from the vertex shader.
varying vec2 v_texCoord;
void main() {
    // Look up a color from the texture.
    vec4 textureColor = texture2D(u_image0, v_texCoord);
    if (textureColor.a < 0.5)
        discard;
    else
        gl_FragColor = vec4(textureColor.rgb, textureColor.a);
    vec4 textureColor1 = texture2D(u_image1, v_texCoord);
    if (textureColor1.a < 0.5)
        discard;
    else
        gl_FragColor = vec4(textureColor1.rgb, textureColor1.a);
    vec4 textureColor2 = texture2D(u_image2, v_texCoord);
    // if (textureColor2.a < 0.5)
    //     discard;
    // else
    //     gl_FragColor = vec4(textureColor2.rgb, textureColor2.a);
}
</script>
Note how the third conditional block in the fragment shader is commented out. If I include this, it breaks. Well, the code runs, but the textures are all over the place.
This is the code that I run when the sprite is instantiated, after the texture image loads.
image.onload = function() {
    that.buffer = gl.createBuffer();
    gl.bindBuffer(gl.ARRAY_BUFFER, that.buffer);
    var xMin = 0;
    var xMax = that.width;
    var yMin = 0;
    var yMax = that.height;
    // setup a rectangle from 0, that.width to 0, that.height in pixels
    gl.bufferData(gl.ARRAY_BUFFER, new Float32Array([
        xMin, yMax,
        xMax, yMax,
        xMin, yMin,
        xMin, yMin,
        xMax, yMax,
        xMax, yMin]), gl.STATIC_DRAW);
    gl.enableVertexAttribArray(globalGL.positionLocation);
    gl.vertexAttribPointer(globalGL.positionLocation, 2, gl.FLOAT, false, 0, 0);
    // look up where the texture coordinates need to go.
    that.texCoordLocation = gl.getAttribLocation(globalGL.program, "a_texCoord");
    // create a texture map object and attach to that
    that.texMap = new TextureMap({horizontalNum: that.texHorizontalNum, verticalNum: that.texVerticalNum});
    var tex = that.texMap.getTile([0, 0]);
    // provide texture coordinates for the rectangle.
    that.texCoordBuffer = gl.createBuffer();
    gl.bindBuffer(gl.ARRAY_BUFFER, that.texCoordBuffer);
    gl.bufferData(gl.ARRAY_BUFFER, new Float32Array([
        tex.minX, tex.maxY,
        tex.maxX, tex.maxY,
        tex.minX, tex.minY,
        tex.minX, tex.minY,
        tex.maxX, tex.maxY,
        tex.maxX, tex.minY]), gl.STATIC_DRAW);
    gl.enableVertexAttribArray(that.texCoordLocation);
    gl.vertexAttribPointer(that.texCoordLocation, 2, gl.FLOAT, false, 0, 0);
    // Create a texture.
    that.texture = gl.createTexture();
    that.u_imageLocation = gl.getUniformLocation(globalGL.program, "u_image" + that.textureIndex);
    gl.uniform1i(that.u_imageLocation, that.textureIndex);
    gl.activeTexture(gl.TEXTURE0 + that.textureIndex);
    gl.bindTexture(gl.TEXTURE_2D, that.texture);
    // Set the parameters so we can render any size image.
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.LINEAR);
    // Upload the image into the texture.
    gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, image);
    globalObj.agents[that.id] = that;
};
"that" is reference to the sprite object that I'm using. that.texMap is an object that tracks the texture coordinate data for the sprite. that.textureIndex is an integer unique to each type of sprite. I also save reference to the GL texture itself as that.texture.
This is what I run in the event loop for each instance of a sprite:
this.draw = function() {
    var tex, texCoordLocation, texCoordBuffer, i;
    // This pulls up the correct texture coordinates depending on what the sprite is doing.
    if (this.shooting) {
        tex = this.texMap.getTile([0, 1]);
    } else if (this.moving) {
        if (this.moving < 15 / this.speed) {
            this.moving++;
            tex = this.texMap.getTile();
        } else {
            this.moving = 1;
            tex = this.texMap.getTile('next');
        }
    } else {
        tex = this.texMap.getTile([0, 0]);
    }
    // binds the texture associated with the sprite.
    gl.bindTexture(gl.TEXTURE_2D, this.texture);
    // gets a reference to the texCoord attribute
    texCoordLocation = gl.getAttribLocation(globalGL.program, 'a_texCoord');
    // create a buffer for texture coordinates
    texCoordBuffer = gl.createBuffer();
    gl.bindBuffer(gl.ARRAY_BUFFER, texCoordBuffer);
    gl.bufferData(gl.ARRAY_BUFFER, new Float32Array([
        tex.minX, tex.maxY,
        tex.maxX, tex.maxY,
        tex.minX, tex.minY,
        tex.minX, tex.minY,
        tex.maxX, tex.maxY,
        tex.maxX, tex.minY]), gl.STATIC_DRAW);
    gl.enableVertexAttribArray(texCoordLocation);
    gl.vertexAttribPointer(texCoordLocation, 2, gl.FLOAT, false, 0, 0);
    var matrixLocation = gl.getUniformLocation(globalGL.program, 'u_matrix');
    // sets up arrays needed to rotate and translate the sprite
    var centerTranslation = [-this.width / 2, -this.height / 2];
    var decenterTranslation = [this.width / 2, this.height / 2];
    var translation = [this.x, this.y];
    var angleInRadians = this.rotation;
    var scale = [1, 1];
    // Compute the matrices
    var centerTranslationMatrix = makeTranslation(centerTranslation[0], centerTranslation[1]);
    var decenterTranslationMatrix = makeTranslation(decenterTranslation[0], decenterTranslation[1]);
    var translationMatrix = makeTranslation(translation[0], translation[1]);
    var rotationMatrix = makeRotation(angleInRadians);
    var scaleMatrix = makeScale(scale[0], scale[1]);
    // Multiply the matrices.
    var matrix = matrixMultiply(scaleMatrix, centerTranslationMatrix);
    matrix = matrixMultiply(matrix, rotationMatrix);
    matrix = matrixMultiply(matrix, decenterTranslationMatrix);
    matrix = matrixMultiply(matrix, translationMatrix);
    // Set the matrix.
    gl.uniformMatrix3fv(matrixLocation, false, matrix);
    // draw
    gl.drawArrays(gl.TRIANGLES, 0, 6);
};
Hopefully this hasn't been too verbose. I've been all over the web looking for tutorials or other scenarios that handle this sort of situation, and I just can't seem to find anything.
Thanks!
EDIT: So, I know this problem isn't too hard for the community, yet quite a few people have viewed my question and no one has responded. This prompted me to do a little self-reflection and take a good, long, hard look at my sample code.
I've restructured it significantly. I realized I don't need to be making a new texture every time a sprite is instantiated. Instead, I load all the textures I'll need once at the beginning. So the second code block has been completely reworked. It still does a lot of the same stuff, but now runs only once per texture, in a for loop at the beginning. I'd be happy to upload the new code if anyone wants to have a look at it, or if someone could point me in the direction of a tutorial that uses multiple textures on multiple 2d quads (more than one quad per texture) in an event loop, I'd be happy to do the research myself.
The issue is that everything will be the same texture; then a new sprite is instantiated and everything turns into the new texture.
From this, I'd suspect you aren't selecting the correct texture unit before binding (i.e. calling gl.activeTexture() with the correct parameter). It looks fine in the onload function, but you don't do it in your this.draw() function before binding the sprite texture, which may be the problem.
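A minimal sketch of that fix, assuming `this.textureIndex` and `this.texture` are set up as in the onload handler above:

```javascript
// Select the sprite's texture unit first, then bind its texture onto it.
// Without the activeTexture call, bindTexture lands on whichever unit
// happened to be active last, which is why instantiating a new sprite
// makes every quad switch to the new texture.
function bindSpriteTexture(gl, sprite) {
    gl.activeTexture(gl.TEXTURE0 + sprite.textureIndex);
    gl.bindTexture(gl.TEXTURE_2D, sprite.texture);
}
```

Called at the top of `this.draw()` (i.e. `bindSpriteTexture(gl, this)`) in place of the bare `gl.bindTexture` call, so each frame rebinds the texture onto the unit that its `u_image` sampler uniform points at.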

How to rotate an object but leave the lighting fixed? (OpenGL)

I have a cube which I want to rotate. I also have a light source GL_LIGHT0. I want to rotate the cube and leave the light source fixed in its location. But the light source is rotating together with my cube. I use OpenGL ES 1.1
Here's a snippet of my code to make my question more clear.
GLfloat glfarr[] = {...};   // cube points
GLubyte glubFaces[] = {...};
Vertex3D normals[] = {...};  // normals to surfaces
const GLfloat light0Position[] = {0.0, 0.0, 3.0, 0.0};
glLightfv(GL_LIGHT0, GL_POSITION, light0Position);
glEnable(GL_LIGHT0);
for (i = 0; i < 8000; ++i)
{
    if (g_bDemoDone) break;
    glLoadIdentity();
    glTranslatef(0.0, 0.0, -12);
    glRotatef(rot, 0.0, 1.0, 1.0);
    rot += 0.8;
    glClearColor(0, 0, 0, 1);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_NORMAL_ARRAY);
    glNormalPointer(GL_FLOAT, 0, normals);
    glVertexPointer(3, GL_FLOAT, 0, glfarr);
    glDrawElements(GL_TRIANGLES, 3*12, GL_UNSIGNED_BYTE, glubFaces);
    glDisableClientState(GL_NORMAL_ARRAY);
    glDisableClientState(GL_VERTEX_ARRAY);
    eglSwapBuffers(eglDisplay, eglSurface);
}
Thanks.
Fixed in relation to what? The light position is transformed by the current MODELVIEW matrix when you do glLightfv(GL_LIGHT0, GL_POSITION, light0Position);
If you want it to move with the cube, you'll have to move the glLightfv(GL_LIGHT0, GL_POSITION, light0Position); call to after the translation and rotation calls.
The problem seems to be that you're rotating the modelview matrix, not the cube itself. Essentially, you're moving the camera.
In order to rotate just the cube, you'll need to rotate the vertices that make up the cube. Generally that's done using a library (GLUT or some such) or simple trig. You'll be operating on the vertex data stored in the array, before the glDrawElements call. You may or may not need to modify the normals or texture coordinates; it depends on your effects and how it ends up looking.
