OpenGL ES drawing and blending

I'm developing a program that is described below.
I draw two triangles at different depths.
In the example below, I'd like to split the green triangle into a visible part and a hidden part. Then, using blending, the hidden part of the green triangle should be drawn as transparent and the visible part in its original color.
I'm writing the code in OpenGL ES (with JNI), and I have two questions.
First:
glClearColor(1.0f, 1.0f, 1.0f, 1.0f);
glClear( GL_DEPTH_BUFFER_BIT | GL_COLOR_BUFFER_BIT);
glUseProgram(gProgram);
glGetUniformLocation(gProgram, "vColor");
const GLfloat gTriangleVertices1[] =
{
-0.5f, -0.5f, -0.5f,
0.0f, 0.5f, -0.5f,
0.5f, -0.5f, -0.5f,
};
float color1[] = {1.0f, 0.0f, 0.0f};
const GLfloat gTriangleVertices2[] =
{
-0.7f, 0.0f, 0.3f,
0.5f, 0.3f, 0.3f,
0.5f, 0.0f, 0.3f,
};
float color2[] = {0.0f, 1.0f, 0.0f};
int mColorHandle1;
int mColorHandle2;
glEnable(GL_BLEND);
glEnable(GL_DEPTH_TEST);
glClearDepthf(1.0f);
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
glUniform4f(mColorHandle1, color1[0], color1[1], color1[2], color1[3]);
glVertexAttribPointer(gvPositionHandle, 3, GL_FLOAT, GL_FALSE, 0, gTriangleVertices1);
glEnableVertexAttribArray(gvPositionHandle);
glDrawArrays(GL_TRIANGLES, 0, 3);
glDepthFunc(GL_GREATER);
//glDepthFunc(GL_LESS);
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
glUniform4f(mColorHandle2, color2[0], color2[1], color2[2], color2[3]);
glDrawArrays(GL_TRIANGLES, 3, 3);
glDisableVertexAttribArray(gvPositionHandle);
With this code, if I change glDepthFunc(GL_GREATER) to glDepthFunc(GL_LESS), the result shows the visible and hidden parts correctly.
However, I do not understand why it gives the correct result, because I only passed the vertices of gTriangleVertices1 and never passed gTriangleVertices2.
Even though I do not supply the vertices of triangle 2, it gives me the correct result. Why?
Second question: I think my use of the blending function is correct (I checked that it works with GLUT/freeglut), but why doesn't it work on OpenGL ES?
///////////////////////// visible part /////////////////////////
glEnable(GL_BLEND);
glEnable(GL_DEPTH_TEST);
glClearDepthf(1.0f);
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
glUniform4f(mColorHandle1, color1[0], color1[1], color1[2], color1[3]);
glVertexAttribPointer(gvPositionHandle, 3, GL_FLOAT, GL_FALSE, 0, gTriangleVertices1);
glEnableVertexAttribArray(gvPositionHandle);
glDrawArrays(GL_TRIANGLES, 0, 3);
glDepthFunc(GL_LESS);
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
glUniform4f(mColorHandle2, color2[0], color2[1], color2[2], color2[3]);
glDrawArrays(GL_TRIANGLES, 3, 3);
glDisableVertexAttribArray(gvPositionHandle);
glDisable(GL_DEPTH_TEST);
glDepthFunc(GL_LESS); // same as the initial depth func
///////////////////////// visible part /////////////////////////
///////////////////////// hidden part /////////////////////////
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glEnable(GL_DEPTH_TEST);
glClearDepthf(1.0f);
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
glUniform4f(mColorHandle1, color1[0], color1[1], color1[2], color1[3]);
glVertexAttribPointer(gvPositionHandle, 3, GL_FLOAT, GL_FALSE, 0, gTriangleVertices1);
glEnableVertexAttribArray(gvPositionHandle);
glDrawArrays(GL_TRIANGLES, 0, 3);
glDepthFunc(GL_GREATER);
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
glUniform4f(mColorHandle2, color2[0], color2[1], color2[2], 0.5f);
glDrawArrays(GL_TRIANGLES, 3, 3);
glDisableVertexAttribArray(gvPositionHandle);
///////////////////////// hidden part /////////////////////////
I just added the blending function. If I use the visible/hidden part alone, it gives the correct result. However, when I add the blending function, it gives a strange result, as shown below: it gives a transparent hidden green triangle.
What's wrong?

The first question:
You created a major bug with glDrawArrays(GL_TRIANGLES, 3, 3);, because it reads past the end of the array you supplied. The result of that is undefined, but in your case the compiler seems to have laid out the two arrays you defined tightly packed:
const GLfloat gTriangleVertices1[] =
{
-0.5f, -0.5f, -0.5f,
0.0f, 0.5f, -0.5f,
0.5f, -0.5f, -0.5f,
};
const GLfloat gTriangleVertices2[] =
{
-0.7f, 0.0f, 0.3f,
0.5f, 0.3f, 0.3f,
0.5f, 0.0f, 0.3f,
};
so in memory they are effectively read as
const GLfloat gTriangleVertices[] =
{
-0.5f, -0.5f, -0.5f,
0.0f, 0.5f, -0.5f,
0.5f, -0.5f, -0.5f,
-0.7f, 0.0f, 0.3f,
0.5f, 0.3f, 0.3f,
0.5f, 0.0f, 0.3f,
};
So the out-of-bounds read happens to land on the part of memory where the second triangle's vertex data sits. Do not mistake this for being correct by accident: it is not correct, and it can break with a different compiler version, platform, or device. So fix it.
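A minimal sketch of the fix, reusing the names from the question: point the attribute at the second array before the second draw call, so each draw call starts at index 0 of the array that is actually bound (alternatively, merge both triangles into one six-vertex array and keep the (0, 3) / (3, 3) ranges).
// First triangle: attribute points at gTriangleVertices1
glVertexAttribPointer(gvPositionHandle, 3, GL_FLOAT, GL_FALSE, 0, gTriangleVertices1);
glEnableVertexAttribArray(gvPositionHandle);
glDrawArrays(GL_TRIANGLES, 0, 3);
// Second triangle: re-point the attribute at gTriangleVertices2 and start at index 0,
// instead of overflowing the first array with a first index of 3
glVertexAttribPointer(gvPositionHandle, 3, GL_FLOAT, GL_FALSE, 0, gTriangleVertices2);
glDrawArrays(GL_TRIANGLES, 0, 3);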
The second question:
Blending and the depth buffer do not go together. You need to avoid that combination. I will not explain the reasons here (search around for it), but the result is undefined and you may not rely on it. Use one or the other.
If you search the web for how to tackle this, you will not find an answer that fits your situation exactly, because it is quite unique. What I suggest as the general solution for your case is to add a stencil buffer.
The first draw call should also write to the stencil buffer, which is then used in the last draw call as well. So the last call should have the depth test disabled but the stencil test enabled. Once you do that, you can most likely drop the depth test entirely and use the stencil only.
I am sure there are other possible solutions using an alpha channel as well, but in any case remember that the depth buffer combined with blending is strictly forbidden and the behavior is undefined; it can even vary between GPUs.
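A minimal sketch of the stencil-only approach, assuming the EGL surface was created with a stencil buffer and reusing the handles from the question (in this particular scene the red triangle is always in front of the green one, so the stencil mask alone marks the hidden part):
glEnable(GL_STENCIL_TEST);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT | GL_STENCIL_BUFFER_BIT);
// Pass 1: draw the red triangle and mark the pixels it covers in the stencil buffer
glStencilFunc(GL_ALWAYS, 1, 0xFF);
glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);
glUniform4f(mColorHandle1, 1.0f, 0.0f, 0.0f, 1.0f);
glVertexAttribPointer(gvPositionHandle, 3, GL_FLOAT, GL_FALSE, 0, gTriangleVertices1);
glEnableVertexAttribArray(gvPositionHandle);
glDrawArrays(GL_TRIANGLES, 0, 3);
// Pass 2: visible part of the green triangle, only where the stencil is NOT set
glStencilFunc(GL_NOTEQUAL, 1, 0xFF);
glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
glUniform4f(mColorHandle2, 0.0f, 1.0f, 0.0f, 1.0f);
glVertexAttribPointer(gvPositionHandle, 3, GL_FLOAT, GL_FALSE, 0, gTriangleVertices2);
glDrawArrays(GL_TRIANGLES, 0, 3);
// Pass 3: hidden part, only where the stencil IS set, blended with half alpha
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glStencilFunc(GL_EQUAL, 1, 0xFF);
glUniform4f(mColorHandle2, 0.0f, 1.0f, 0.0f, 0.5f);
glDrawArrays(GL_TRIANGLES, 0, 3);
glDisable(GL_BLEND);
glDisable(GL_STENCIL_TEST);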

Related

XMVector3Project unexpected behaviour

I'm trying to figure out the world-space to screen-space transform. As I understand it, in D3D11 the function XMVector3Project should handle this. However, when I use it like this:
XMVECTOR eye = XMVectorSet(10000, 0.0f, 1.5f, 0.0f);
XMVECTOR at = XMVectorSet(10000, 0.0f, 0.0f, 0.0f);
XMVECTOR up = XMVectorSet(0.0f, 1.0f, 0.0f, 0.0f);
auto viewMatrix = XMMatrixTranspose(XMMatrixLookAtRH(eye, at, up));
XMVECTOR vec = XMVector3Project(XMVectorSet(0.0, 0.0, 0.0, 1.0f), 0, 0, 480, 800, 0, 1, XMMatrixIdentity(), viewMatrix, XMMatrixIdentity());
it returns the point (240, 480). I don't understand how that's possible, because even with no projection matrix, when I set the view matrix to look at point (1000, 1000, x), the point (0, 0, 0) shouldn't appear on screen at all.
That's just my understanding, probably wrong, so I would like to know: is that the intended behaviour?
I think the problem here is your use of XMMatrixTranspose. DirectXMath (aka XNAMath version 3, aka xboxmath) functions are all written assuming you have row-major matrices, either left-handed or right-handed. By applying XMMatrixTranspose to the look-at matrix, you make it column-major. While this is commonly done as a last step before setting it into a constant buffer for consumption by HLSL (see the MSDN DirectXMath Programmer's Guide and the MSDN HLSL docs for details), the result does not make sense when used this way with XMVector3Project.
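A sketch of the corrected call, under the assumption that everything else stays as in the question: keep the look-at matrix row-major for DirectXMath and only transpose when copying it into the constant buffer for HLSL.
// Row-major view matrix, as DirectXMath functions expect
XMMATRIX viewMatrix = XMMatrixLookAtRH(eye, at, up);
XMVECTOR vec = XMVector3Project(XMVectorSet(0.0f, 0.0f, 0.0f, 1.0f),
    0, 0, 480, 800, 0, 1,
    XMMatrixIdentity(), viewMatrix, XMMatrixIdentity());
// Transpose only when uploading the matrix to a constant buffer for HLSL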
BTW, I'm assuming your use of XMVectorSet here is just for testing, but the efficient way to code a constant XMVECTOR is using XMVECTORF32.
static const XMVECTORF32 eye = { 10000, 0.0f, 1.5f, 0.0f };
static const XMVECTORF32 at = { 10000, 0.0f, 0.0f, 0.0f };
static const XMVECTORF32 up = { 0.0f, 1.0f, 0.0f, 0.0f };

OpenGL ES 2.0 drawing more than one texture

My question is quite trivial, I believe. I'm using OpenGL ES 2.0 to draw a simple 2D scene.
I have a background texture that stretches over the whole screen and another texture of a flower (or shall I say sprite?) that is drawn at a specific location on screen.
So the trivial way I can think of doing it is to call glDrawArrays twice, once with the vertices of the background texture and once with the vertices of the flower texture.
Is that the right way? If so, does that mean that for 10 flowers I'll need to call glDrawArrays 10 times?
And what about blending? If I want to blend the flower with the background, I need both the background and flower pixel colors, and that may be a problem with two draws, no?
Or is it possible to do it in one draw? If so, how can I create a shader that knows whether it is currently processing a background texture vertex or a flower texture vertex?
The problem with one draw is that the shader needs to know whether the current vertex is a background vertex (then use the background texture color) or a flower vertex (then use the flower texture color), and I don't know how to do that.
Here is how I use one draw call to draw the background image stretched over the whole screen and the flower at half size, centered.
- (void)renderOnce {
//... set program, clear color..
glActiveTexture(GL_TEXTURE2);
glBindTexture(GL_TEXTURE_2D, backgroundTexture);
glUniform1i(backgroundTextureUniform, 2);
glActiveTexture(GL_TEXTURE3);
glBindTexture(GL_TEXTURE_2D, flowerTexture);
glUniform1i(flowerTextureUniform, 3);
static const GLfloat allVertices[] = {
-1.0f, -1.0f, // background texture coordinates
1.0f, -1.0f, // to draw in whole screen
-1.0f, 1.0f, //
1.0f, 1.0f,
-0.5f, -0.5f, // flower texture coordinates
0.5f, -0.5f, // to draw half screen size
-0.5f, 0.5f, // and centered
0.5f, 0.5f, //
};
// both background and flower texture coords use the whole texture
static const GLfloat backgroundTextureCoordinates[] = {
0.0f, 0.0f,
1.0f, 0.0f,
0.0f, 1.0f,
1.0f, 1.0f,
};
static const GLfloat flowerTextureCoordinates[] = {
0.0f, 0.0f,
1.0f, 0.0f,
0.0f, 1.0f,
1.0f, 1.0f,
};
glVertexAttribPointer(positionAttribute, 2, GL_FLOAT, 0, 0, allVertices);
glVertexAttribPointer(backgroundTextureCoordinateAttribute, 2, GL_FLOAT, 0, 0, backgroundTextureCoordinates);
glVertexAttribPointer(flowerTextureCoordinateAttribute, 2, GL_FLOAT, 0, 0, flowerTextureCoordinates);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
}
You have two choices:
Call glDrawArrays for every texture you want to draw; this will be slow if you have more than 10-20 textures. To speed it up, though, you can use a hardware VBO.
Batch the vertices (positions, texture coords, color) of all the sprites you want to draw into one array, use a texture atlas (a texture that has all of the pictures you want to draw in it), and draw all of this with one glDrawArrays call (a sketch follows below).
The second way is obviously the better and the right one. To get an idea of how to do it, look at my answer here.
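A minimal sketch of the batched approach, assuming a texture atlas where the background image occupies the left half and the flower the right half, and a single texture-coordinate attribute (the atlas layout and the texCoordAttribute name are made up for illustration):
// Two sprites, 6 vertices each (two triangles), interleaved as x, y, u, v
static const GLfloat batchedSprites[] = {
// background quad, sampling the left half of the atlas
-1.0f, -1.0f, 0.0f, 0.0f,
 1.0f, -1.0f, 0.5f, 0.0f,
 1.0f,  1.0f, 0.5f, 1.0f,
-1.0f, -1.0f, 0.0f, 0.0f,
 1.0f,  1.0f, 0.5f, 1.0f,
-1.0f,  1.0f, 0.0f, 1.0f,
// flower quad (half size, centered), sampling the right half of the atlas
-0.5f, -0.5f, 0.5f, 0.0f,
 0.5f, -0.5f, 1.0f, 0.0f,
 0.5f,  0.5f, 1.0f, 1.0f,
-0.5f, -0.5f, 0.5f, 0.0f,
 0.5f,  0.5f, 1.0f, 1.0f,
-0.5f,  0.5f, 0.5f, 1.0f,
};
const GLsizei stride = 4 * sizeof(GLfloat);
glVertexAttribPointer(positionAttribute, 2, GL_FLOAT, GL_FALSE, stride, batchedSprites);
glVertexAttribPointer(texCoordAttribute, 2, GL_FLOAT, GL_FALSE, stride, batchedSprites + 2);
glEnableVertexAttribArray(positionAttribute);
glEnableVertexAttribArray(texCoordAttribute);
glDrawArrays(GL_TRIANGLES, 0, 12);   // one draw call for both sprites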

When are the vertex indices used by gl_VertexID determined?

I am trying to understand the behavior of gl_VertexID in vertex shaders. For that, I am trying to render two squares using two glDrawArrays calls, one after another, and I want to apply a red color to only one square using gl_VertexID in the vertex shader:
out vec4 color;
in vec4 tdk_Vertex;
void main(void)
{
if(gl_VertexID < 4)
{
color = vec4(1.0f, 0.0f, 0.0f, 1.0f);
}
else
{
color = vec4(1.0f, 1.0f, 1.0f, 1.0f);
}
gl_Position = tdk_Vertex;
}
The color is passed to the fragment shader.
The square coordinates are:
static GLfloat vertices[] =
{ -0.75f, 0.25f, 0.0f, 1.0f,
-0.75f, 0.5f, 0.0, 1.0f,
-0.25f, 0.5f, 0.0f, 1.0f,
-0.25f, 0.25f, 0.0f, 1.0f,
0.25f, 0.25f, 0.0f, 1.0f,
0.25f, 0.5f, 0.0f, 1.0f,
0.75f, 0.5f, 0.0f, 1.0f,
0.75f, 0.25f, 0.0f, 1.0f};
The draw calls are made as:
for(int i=0; i<8; i+=4)
{
glDrawArrays(GL_TRIANGLE_FAN, i, 4);
}
On an Nvidia card, issuing the two glDrawArrays calls displays the expected result, i.e. one square is rendered red and the other white.
So I want to know: is this correct behaviour, or should the gl_VertexID indices be generated per glDrawArrays call, so that both squares end up with the same red color?
I am using two glDrawArrays calls, so my understanding from the specification is that both squares should be red:
http://www.opengl.org/sdk/docs/manglsl/xhtml/gl_VertexID.xml
I want to test this for GLSL ES 300.
In the case of glDrawArrays, gl_VertexID is intended to be the index of the vertex within the buffer. Your first draw call renders the indices in the range [0, 4), so those are the values that gl_VertexID will take. Your second draw call renders the indices in the range [4, 8), and those are the values that gl_VertexID will take.
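If per-draw-call indices are what you want (so that both squares hit the same branch), one option is to pass the first argument of each draw call as a uniform and subtract it. A sketch, with the firstVertex uniform name made up for illustration:
#version 300 es
// vertex shader: subtract the draw call's 'first' index from gl_VertexID
uniform int firstVertex;   // set to the 'first' argument of each glDrawArrays call
in vec4 tdk_Vertex;
out vec4 color;
void main(void)
{
    if (gl_VertexID - firstVertex < 4)
        color = vec4(1.0, 0.0, 0.0, 1.0);
    else
        color = vec4(1.0, 1.0, 1.0, 1.0);
    gl_Position = tdk_Vertex;
}
// application side, before each draw call:
//   glUniform1i(glGetUniformLocation(program, "firstVertex"), i);
//   glDrawArrays(GL_TRIANGLE_FAN, i, 4);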

How to Crop and Scale a Texture in OpenGL

I have an input texture that is 852x640 and an output texture that is 612x612. I am passing the input through a shader and want the output to be scaled and cropped properly. I'm having trouble getting the squareCoordinates, textureCoordinates and viewPorts to work properly together.
I do not want to just crop, I want to scale it as well to get the most amount of the image as possible. If I were using Photoshop I'd do this in two steps (in OpenGL I'm trying to do this in one step):
Scale the image to 612x814
Crop off the excess 101px at each side
I'm using standard square vertices and texture vertices:
static const GLfloat squareVertices[] = {
-1.0f, -1.0f,
1.0f, -1.0f,
-1.0f, 1.0f,
1.0f, 1.0f,
};
static const GLfloat squareTextureVertices[] = {
0.0f, 0.0f,
1.0f, 0.0f,
0.0f, 1.0f,
1.0f, 1.0f
};
I don't exactly know what the viewPort should be.
The viewport would be 612x612 pixels.
To scale and crop the original image, the easiest way is to keep the vertices covering the full 612x612 rect (in your case squareVertices stays unchanged), but set the texture coordinates so that the left and right sides are cropped out:
static const GLfloat squareTextureVertices[] = {
(852.0f-640.0f)/852.0f*0.5f, 0.0f,
1.0f - (852.0f-640.0f)/852.0f*0.5f, 0.0f,
(852.0f-640.0f)/852.0f*0.5f, 1.0f,
1.0f - (852.0f-640.0f)/852.0f*0.5f, 1.0f
};
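To see where those numbers come from, here is a small sketch of the arithmetic (assuming, as above, a source that is wider than it is tall and a square target):
// Source 852x640, target 612x612: crop the source to a centered square,
// then let the 612x612 viewport scale it down.
float srcW = 852.0f, srcH = 640.0f;
float crop = (srcW - srcH) / srcW * 0.5f;   // = 106/852, about 0.124 per side
// Sample u from 'crop' to '1.0 - crop', i.e. the central 640x640 texels;
// the 612x612 viewport then scales that square down to the output size.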

Rendering 2D sprites in a 3D world?

How do I render 2D sprites in OpenGL given that I have a png of the sprite? See images as an example of the effect I'd like to achieve. Also I would like to overlay weapons on the screen like the rifle in the bottom image. Does anyone know how I would achieve the two effects? Any help is greatly appreciated.
In 3D terms, this is called a "billboard". A billboard is completely flat 2D plane with a texture on it and it always faces the camera.
See here for a pure OpenGL implementation: http://nehe.gamedev.net/data/articles/article.asp?article=19
Just about any 3D engine should be able to do them by default. Ogre3D can do it, for instance.
a) For the first case:
That's not really 2D sprites. Those men seem to be rendered as single quads with a texture with some kind of transparency (either alpha test or alpha blending).
Anyway, even a single quad can still be considered a 3D object, so for such situation you might want to treat it as one: track its translation and rotation and render it in the same way as any other 3D object.
b) For the second case:
If you want the gun (a 2D picture, I presume) to be rendered in the same place without any perspective transformation, then you can use the same technique one uses for drawing the GUI (etc.). Have a look at my post here:
2D overlay on a 3D scene
For the overlaying of the 2D weapon, you can use glOrtho for the camera view.
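A minimal fixed-function sketch of that overlay idea, assuming weaponTexture is already loaded and x, y, w, h describe where the weapon should sit in window coordinates (all of those names are made up for illustration):
// Draw a screen-space textured quad on top of the 3D scene (old-style fixed-function GL)
void drawWeaponOverlay(GLuint weaponTexture, float x, float y, float w, float h,
                       int windowWidth, int windowHeight)
{
    glMatrixMode(GL_PROJECTION);
    glPushMatrix();
    glLoadIdentity();
    glOrtho(0, windowWidth, windowHeight, 0, -1, 1);   // pixel coordinates, origin top-left
    glMatrixMode(GL_MODELVIEW);
    glPushMatrix();
    glLoadIdentity();
    glDisable(GL_DEPTH_TEST);
    glEnable(GL_TEXTURE_2D);
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);  // respect the png's alpha
    glBindTexture(GL_TEXTURE_2D, weaponTexture);
    glBegin(GL_QUADS);
    glTexCoord2f(0, 0); glVertex2f(x,     y);
    glTexCoord2f(1, 0); glVertex2f(x + w, y);
    glTexCoord2f(1, 1); glVertex2f(x + w, y + h);
    glTexCoord2f(0, 1); glVertex2f(x,     y + h);
    glEnd();
    glEnable(GL_DEPTH_TEST);
    glMatrixMode(GL_PROJECTION);
    glPopMatrix();
    glMatrixMode(GL_MODELVIEW);
    glPopMatrix();
}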
You create a 3d quad and map the .png-based texture to it. You can make the quad face whatever direction you want, as in the first picture, or make it always facing the camera (like a billboard, mentioned by Svenstaro) as in your second picture. Though, to be fair, I am sure that second picture just blitted the image (with some scaling) directly in the software-created framebuffer (that looks like Wolf3d tech, software rendering).
Take a look at OpenGL Point Sprites:
http://www.informit.com/articles/article.aspx?p=770639&seqNum=7
Especially useful for particle systems, but it may do the trick for your purposes (a small sketch follows below).
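A minimal sketch of the point-sprite idea (desktop GL core-profile GLSL, with made-up uniform and attribute names): each sprite is submitted as a single point, and the fragment shader samples the sprite texture with gl_PointCoord. Remember to glEnable(GL_PROGRAM_POINT_SIZE) on the application side.
#version 330 core
// vertex shader: one point per sprite
uniform mat4 mvp;
uniform float spriteSize;     // point diameter in pixels
in vec3 position;
void main()
{
    gl_Position = mvp * vec4(position, 1.0);
    gl_PointSize = spriteSize;
}
#version 330 core
// fragment shader: gl_PointCoord runs from (0,0) to (1,1) across the point
uniform sampler2D spriteTexture;
out vec4 fragColor;
void main()
{
    fragColor = texture(spriteTexture, gl_PointCoord);
}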
Check this tutorial about billboards. I think you'll find it useful.
http://www.lighthouse3d.com/opengl/billboarding/
opengl-tutorial has:
a tutorial http://www.opengl-tutorial.org/intermediate-tutorials/billboards-particles/billboards/ focused on energy bars
OpenGL 3.3+ WTF licensed code that just works: https://github.com/opengl-tutorials/ogl/blob/71cad106cefef671907ba7791b28b19fa2cc034d/tutorial18_billboards_and_particles/tutorial18_billboards.cpp
Code:
#include <stdio.h>
#include <stdlib.h>
#include <vector>
#include <algorithm>
#include <GL/glew.h>
#include <glfw3.h>
GLFWwindow* window;
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <glm/gtx/norm.hpp>
using namespace glm;
#include <common/shader.hpp>
#include <common/texture.hpp>
#include <common/controls.hpp>
#define DRAW_CUBE // Comment or uncomment this to simplify the code
int main( void )
{
// Initialise GLFW
if( !glfwInit() )
{
fprintf( stderr, "Failed to initialize GLFW\n" );
getchar();
return -1;
}
glfwWindowHint(GLFW_SAMPLES, 4);
glfwWindowHint(GLFW_RESIZABLE,GL_FALSE);
glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3);
glfwWindowHint(GLFW_OPENGL_FORWARD_COMPAT, GL_TRUE); // To make MacOS happy; should not be needed
glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
// Open a window and create its OpenGL context
window = glfwCreateWindow( 1024, 768, "Tutorial 18 - Billboards", NULL, NULL);
if( window == NULL ){
fprintf( stderr, "Failed to open GLFW window. If you have an Intel GPU, they are not 3.3 compatible. Try the 2.1 version of the tutorials.\n" );
getchar();
glfwTerminate();
return -1;
}
glfwMakeContextCurrent(window);
// Initialize GLEW
glewExperimental = true; // Needed for core profile
if (glewInit() != GLEW_OK) {
fprintf(stderr, "Failed to initialize GLEW\n");
getchar();
glfwTerminate();
return -1;
}
// Ensure we can capture the escape key being pressed below
glfwSetInputMode(window, GLFW_STICKY_KEYS, GL_TRUE);
// Hide the mouse and enable unlimited movement
glfwSetInputMode(window, GLFW_CURSOR, GLFW_CURSOR_DISABLED);
// Set the mouse at the center of the screen
glfwPollEvents();
glfwSetCursorPos(window, 1024/2, 768/2);
// Dark blue background
glClearColor(0.0f, 0.0f, 0.4f, 0.0f);
// Enable depth test
glEnable(GL_DEPTH_TEST);
// Accept fragment if it closer to the camera than the former one
glDepthFunc(GL_LESS);
GLuint VertexArrayID;
glGenVertexArrays(1, &VertexArrayID);
glBindVertexArray(VertexArrayID);
// Create and compile our GLSL program from the shaders
GLuint programID = LoadShaders( "Billboard.vertexshader", "Billboard.fragmentshader" );
// Vertex shader
GLuint CameraRight_worldspace_ID = glGetUniformLocation(programID, "CameraRight_worldspace");
GLuint CameraUp_worldspace_ID = glGetUniformLocation(programID, "CameraUp_worldspace");
GLuint ViewProjMatrixID = glGetUniformLocation(programID, "VP");
GLuint BillboardPosID = glGetUniformLocation(programID, "BillboardPos");
GLuint BillboardSizeID = glGetUniformLocation(programID, "BillboardSize");
GLuint LifeLevelID = glGetUniformLocation(programID, "LifeLevel");
GLuint TextureID = glGetUniformLocation(programID, "myTextureSampler");
GLuint Texture = loadDDS("ExampleBillboard.DDS");
// The VBO containing the 4 vertices of the particles.
static const GLfloat g_vertex_buffer_data[] = {
-0.5f, -0.5f, 0.0f,
0.5f, -0.5f, 0.0f,
-0.5f, 0.5f, 0.0f,
0.5f, 0.5f, 0.0f,
};
GLuint billboard_vertex_buffer;
glGenBuffers(1, &billboard_vertex_buffer);
glBindBuffer(GL_ARRAY_BUFFER, billboard_vertex_buffer);
glBufferData(GL_ARRAY_BUFFER, sizeof(g_vertex_buffer_data), g_vertex_buffer_data, GL_DYNAMIC_DRAW);
#ifdef DRAW_CUBE
// Everything here comes from Tutorial 4
GLuint cubeProgramID = LoadShaders( "../tutorial04_colored_cube/TransformVertexShader.vertexshader", "../tutorial04_colored_cube/ColorFragmentShader.fragmentshader" );
GLuint cubeMatrixID = glGetUniformLocation(cubeProgramID, "MVP");
static const GLfloat g_cube_vertex_buffer_data[] = { -1.0f,-1.0f,-1.0f,-1.0f,-1.0f, 1.0f,-1.0f, 1.0f, 1.0f,1.0f, 1.0f,-1.0f,-1.0f,-1.0f,-1.0f,-1.0f, 1.0f,-1.0f,1.0f,-1.0f, 1.0f,-1.0f,-1.0f,-1.0f,1.0f,-1.0f,-1.0f,1.0f, 1.0f,-1.0f,1.0f,-1.0f,-1.0f,-1.0f,-1.0f,-1.0f,-1.0f,-1.0f,-1.0f,-1.0f, 1.0f, 1.0f,-1.0f, 1.0f,-1.0f,1.0f,-1.0f, 1.0f,-1.0f,-1.0f, 1.0f,-1.0f,-1.0f,-1.0f,-1.0f, 1.0f, 1.0f,-1.0f,-1.0f, 1.0f,1.0f,-1.0f, 1.0f,1.0f, 1.0f, 1.0f,1.0f,-1.0f,-1.0f,1.0f, 1.0f,-1.0f,1.0f,-1.0f,-1.0f,1.0f, 1.0f, 1.0f,1.0f,-1.0f, 1.0f,1.0f, 1.0f, 1.0f,1.0f, 1.0f,-1.0f,-1.0f, 1.0f,-1.0f,1.0f, 1.0f, 1.0f,-1.0f, 1.0f,-1.0f,-1.0f, 1.0f, 1.0f,1.0f, 1.0f, 1.0f,-1.0f, 1.0f, 1.0f,1.0f,-1.0f, 1.0f};
static const GLfloat g_cube_color_buffer_data[] = { 0.583f, 0.771f, 0.014f,0.609f, 0.115f, 0.436f,0.327f, 0.483f, 0.844f,0.822f, 0.569f, 0.201f,0.435f, 0.602f, 0.223f,0.310f, 0.747f, 0.185f,0.597f, 0.770f, 0.761f,0.559f, 0.436f, 0.730f,0.359f, 0.583f, 0.152f,0.483f, 0.596f, 0.789f,0.559f, 0.861f, 0.639f,0.195f, 0.548f, 0.859f,0.014f, 0.184f, 0.576f,0.771f, 0.328f, 0.970f,0.406f, 0.615f, 0.116f,0.676f, 0.977f, 0.133f,0.971f, 0.572f, 0.833f,0.140f, 0.616f, 0.489f,0.997f, 0.513f, 0.064f,0.945f, 0.719f, 0.592f,0.543f, 0.021f, 0.978f,0.279f, 0.317f, 0.505f,0.167f, 0.620f, 0.077f,0.347f, 0.857f, 0.137f,0.055f, 0.953f, 0.042f,0.714f, 0.505f, 0.345f,0.783f, 0.290f, 0.734f,0.722f, 0.645f, 0.174f,0.302f, 0.455f, 0.848f,0.225f, 0.587f, 0.040f,0.517f, 0.713f, 0.338f,0.053f, 0.959f, 0.120f,0.393f, 0.621f, 0.362f,0.673f, 0.211f, 0.457f,0.820f, 0.883f, 0.371f,0.982f, 0.099f, 0.879f};
GLuint cubevertexbuffer;
glGenBuffers(1, &cubevertexbuffer);
glBindBuffer(GL_ARRAY_BUFFER, cubevertexbuffer);
glBufferData(GL_ARRAY_BUFFER, sizeof(g_cube_vertex_buffer_data), g_cube_vertex_buffer_data, GL_DYNAMIC_DRAW);
GLuint cubecolorbuffer;
glGenBuffers(1, &cubecolorbuffer);
glBindBuffer(GL_ARRAY_BUFFER, cubecolorbuffer);
glBufferData(GL_ARRAY_BUFFER, sizeof(g_cube_color_buffer_data), g_cube_color_buffer_data, GL_DYNAMIC_DRAW);
#endif
double lastTime = glfwGetTime();
do
{
// Clear the screen
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
double currentTime = glfwGetTime();
double delta = currentTime - lastTime;
lastTime = currentTime;
computeMatricesFromInputs();
glm::mat4 ProjectionMatrix = getProjectionMatrix();
glm::mat4 ViewMatrix = getViewMatrix();
#ifdef DRAW_CUBE
// Again : this is just Tutorial 4 !
glDisable(GL_BLEND);
glUseProgram(cubeProgramID);
glm::mat4 cubeModelMatrix(1.0f);
cubeModelMatrix = glm::scale(cubeModelMatrix, glm::vec3(0.2f, 0.2f, 0.2f));
glm::mat4 cubeMVP = ProjectionMatrix * ViewMatrix * cubeModelMatrix;
glUniformMatrix4fv(cubeMatrixID, 1, GL_FALSE, &cubeMVP[0][0]);
glEnableVertexAttribArray(0);
glBindBuffer(GL_ARRAY_BUFFER, cubevertexbuffer);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, (void*)0 );
glEnableVertexAttribArray(1);
glBindBuffer(GL_ARRAY_BUFFER, cubecolorbuffer);
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, 0, (void*)0 );
glDrawArrays(GL_TRIANGLES, 0, 12*3);
glDisableVertexAttribArray(0);
glDisableVertexAttribArray(1);
#endif
// We will need the camera's position in order to sort the particles
// w.r.t the camera's distance.
// There should be a getCameraPosition() function in common/controls.cpp,
// but this works too.
glm::vec3 CameraPosition(glm::inverse(ViewMatrix)[3]);
glm::mat4 ViewProjectionMatrix = ProjectionMatrix * ViewMatrix;
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
// Use our shader
glUseProgram(programID);
// Bind our texture in Texture Unit 0
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, Texture);
// Set our "myTextureSampler" sampler to user Texture Unit 0
glUniform1i(TextureID, 0);
// This is the only interesting part of the tutorial.
// This is equivalent to multiplying (1,0,0) and (0,1,0) by inverse(ViewMatrix).
// ViewMatrix is orthogonal (it was made this way),
// so its inverse is also its transpose,
// and transposing a matrix is "free" (inverting is slow)
glUniform3f(CameraRight_worldspace_ID, ViewMatrix[0][0], ViewMatrix[1][0], ViewMatrix[2][0]);
glUniform3f(CameraUp_worldspace_ID , ViewMatrix[0][1], ViewMatrix[1][1], ViewMatrix[2][1]);
glUniform3f(BillboardPosID, 0.0f, 0.5f, 0.0f); // The billboard will be just above the cube
glUniform2f(BillboardSizeID, 1.0f, 0.125f); // and 1m*12cm, because it matches its 256*32 resolution =)
// Generate some fake life level and send it to glsl
float LifeLevel = sin(currentTime)*0.1f + 0.7f;
glUniform1f(LifeLevelID, LifeLevel);
glUniformMatrix4fv(ViewProjMatrixID, 1, GL_FALSE, &ViewProjectionMatrix[0][0]);
// 1st attribute buffer : vertices
glEnableVertexAttribArray(0);
glBindBuffer(GL_ARRAY_BUFFER, billboard_vertex_buffer);
glVertexAttribPointer(
0, // attribute. No particular reason for 0, but must match the layout in the shader.
3, // size
GL_FLOAT, // type
GL_FALSE, // normalized?
0, // stride
(void*)0 // array buffer offset
);
// Draw the billboard !
// This draws a triangle_strip which looks like a quad.
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
glDisableVertexAttribArray(0);
// Swap buffers
glfwSwapBuffers(window);
glfwPollEvents();
} // Check if the ESC key was pressed or the window was closed
while( glfwGetKey(window, GLFW_KEY_ESCAPE ) != GLFW_PRESS &&
glfwWindowShouldClose(window) == 0 );
// Cleanup VBO and shader
glDeleteBuffers(1, &billboard_vertex_buffer);
glDeleteProgram(programID);
glDeleteTextures(1, &Texture); // delete the texture object, not the uniform location
glDeleteVertexArrays(1, &VertexArrayID);
#ifdef DRAW_CUBE
glDeleteProgram(cubeProgramID);
glDeleteBuffers(1, &cubevertexbuffer);
glDeleteBuffers(1, &cubecolorbuffer);
#endif
// Close OpenGL window and terminate GLFW
glfwTerminate();
return 0;
}
Tested on Ubuntu 15.10.
An axis-oriented version of this question: https://gamedev.stackexchange.com/questions/35946/how-do-i-implement-camera-axis-aligned-billboards Here we have done a viewpoint-oriented billboard.
