Loading vertices in glBufferData with OpenGL ES 2.0 - opengl-es

I'm working with a parser that handles Wavefront OBJ 3D object files, and I'm not quite sure whether I'm loading the data into OpenGL correctly.
So basically I read my Wavefront OBJ file and parse all of its data.
Normally, in OpenGL ES 1.1, I load the data like this:
glBegin(GL_TRIANGLES);
glNormal3f(normals[faces[i].normal[0]].v[0], normals[faces[i].normal[0]].v[1], normals[faces[i].normal[0]].v[2]);
glVertex3f(vertices[faces[i].vertex[0]].v[0], vertices[faces[i].vertex[0]].v[1], vertices[faces[i].vertex[0]].v[2]);
glNormal3f(normals[faces[i].normal[1]].v[0], normals[faces[i].normal[1]].v[1], normals[faces[i].normal[1]].v[2]);
glVertex3f(vertices[faces[i].vertex[1]].v[0], vertices[faces[i].vertex[1]].v[1], vertices[faces[i].vertex[1]].v[2]);
glNormal3f(normals[faces[i].normal[2]].v[0], normals[faces[i].normal[2]].v[1], normals[faces[i].normal[2]].v[2]);
glVertex3f(vertices[faces[i].vertex[2]].v[0], vertices[faces[i].vertex[2]].v[1], vertices[faces[i].vertex[2]].v[2]);
glEnd();
As for OpenGL ES 2.0, I have tried the following for the vertices, without any luck:
glBufferData(GL_ARRAY_BUFFER, obj.vertices.size()*sizeof(float), &(obj.vertices[0].v), GL_STATIC_DRAW);
My data structures:
struct vertex {
    std::vector<float> v;
};
The v vector is of course filled for each vertex with {x, y, z}.
class waveObj {
public:
    std::vector<vertex> vertices;
    std::vector<vertex> texcoords;
    std::vector<vertex> normals;
    std::vector<vertex> parameters;
    std::vector<face> faces;
};
struct face {
    std::vector<int> vertex;
    std::vector<int> texture;
    std::vector<int> normal;
};
How can I load my data in OpenGL ES 2.0 the way I did it in OpenGL ES 1.1?
Also, is it even possible to load the vector (v) directly, rather than the positions as separate floats (x, y, z)?

There are several options:
Create a separate VBO for each kind of data: one for positions, one for normals, etc.
Create a single VBO with interleaved data - but this requires a few more changes in your code.
To start with, I suggest one buffer per vertex attribute, plus an index buffer.
One thing about the index buffer:
you have separate indices for positions, normals and texture coordinates (you take those values directly from the OBJ file), but if you want to draw the geometry using an IBO (index buffer object) you need to build a single index per vertex.
Here is some of my code that does that:
map<FaceIndex, GLushort, FaceIndexComparator>::iterator cacheIndex = cache.find(fi);
if (cacheIndex != cache.end()) {
    // this v/t/n triple was seen before: reuse its index
    node->mIndices.push_back(cacheIndex->second);
}
else {
    // new triple: append its attributes and remember the index it got
    node->mPositions.push_back(positions[fi.v]);
    node->mNormals.push_back(normals[fi.n]);
    node->mTexCoords.push_back(texCoords[fi.t]);
    node->mIndices.push_back((unsigned int)node->mPositions.size() - 1);
    cache[fi] = ((unsigned int)node->mPositions.size() - 1);
}
What it does:
it keeps a vector for each of positions, normals and texture coordinates... when an "f" record appears in the OBJ file, I check whether that triple is already in my cache.
If such a triple exists, I push its existing index into my node's indices.
If not, I need to create a new index.
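To connect this back to the original glBufferData call: OpenGL needs one contiguous block of floats per attribute, and a std::vector<vertex> whose elements each own their own std::vector<float> is not contiguous, so &(obj.vertices[0].v) does not point at the coordinate data. Below is a minimal sketch under the structures shown in the question (the caching above is omitted for brevity, and positionLoc/normalLoc are attribute locations from your own shader program, not existing names):
// Flatten the parsed OBJ data into contiguous arrays with a single index list.
// Note: indices in an OBJ file are 1-based; subtract 1 if you stored them raw.
std::vector<float>    positions;
std::vector<float>    normalData;
std::vector<GLushort> indices;

for (const face& f : obj.faces) {
    for (size_t k = 0; k < f.vertex.size(); ++k) {
        const vertex& p = obj.vertices[f.vertex[k]];
        const vertex& n = obj.normals[f.normal[k]];
        positions.insert(positions.end(), p.v.begin(), p.v.end());
        normalData.insert(normalData.end(), n.v.begin(), n.v.end());
        indices.push_back(static_cast<GLushort>(positions.size() / 3 - 1));
    }
}

// One VBO per attribute plus an index buffer, as suggested above.
GLuint vbo[2], ibo;
glGenBuffers(2, vbo);
glGenBuffers(1, &ibo);

glBindBuffer(GL_ARRAY_BUFFER, vbo[0]);
glBufferData(GL_ARRAY_BUFFER, positions.size() * sizeof(float), positions.data(), GL_STATIC_DRAW);
glVertexAttribPointer(positionLoc, 3, GL_FLOAT, GL_FALSE, 0, 0);
glEnableVertexAttribArray(positionLoc);

glBindBuffer(GL_ARRAY_BUFFER, vbo[1]);
glBufferData(GL_ARRAY_BUFFER, normalData.size() * sizeof(float), normalData.data(), GL_STATIC_DRAW);
glVertexAttribPointer(normalLoc, 3, GL_FLOAT, GL_FALSE, 0, 0);
glEnableVertexAttribArray(normalLoc);

glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, indices.size() * sizeof(GLushort), indices.data(), GL_STATIC_DRAW);
glDrawElements(GL_TRIANGLES, (GLsizei)indices.size(), GL_UNSIGNED_SHORT, 0);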

Related

Can I make a GLSL struct have std140 layout?

I just tried to do this
C++
struct PointLight
{
glm::vec4 position;
glm::vec4 colour;
};
std::vector <PointLight> lights_array;
GLSL 320 ES:
layout (std140) struct PointLight
{
vec4 position;
vec4 colour;
};
layout (std140) buffer Lights
{
int count;
PointLight data [];
}
b_lights;
The compile error surprised me:
error C7600: no value specified for layout qualifier 'std140'
I can't find a straight answer but I get the impression that I can't specify std140 for struct definitions. Is this so? Or how can I spell it?
If not, then how am I able to guarantee that I can send lights_array to glBufferData so that it has the correct layout in the shader's b_lights.data array?
In other words, why is std140 required for the buffer but not for the struct?
Interface blocks have layouts, not structs. The layout applies to how the block lays out its elements, recursively, through their entire contents.
So you don't need to apply an interface block layout to a random struct.
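On the C++ side, note that under std140 the PointLight array inside Lights has to start on a 16-byte boundary, so the int count is effectively followed by 12 bytes of padding before data[0]. A minimal upload sketch, assuming a context with SSBO support (GL 4.3+ / GLES 3.1+); uploadLights and ssbo are illustrative names, not part of any API:
#include <cstring>
#include <vector>
#include <glm/glm.hpp>
// GL header/loader assumed, e.g. <GLES3/gl32.h>

struct PointLight {
    glm::vec4 position;   // 16 bytes, matches a std140 vec4
    glm::vec4 colour;     // struct size/stride = 32 bytes, matches std140
};

void uploadLights(GLuint ssbo, const std::vector<PointLight>& lights)
{
    const size_t headerSize = 16;   // int count + 12 bytes of std140 padding
    const size_t dataSize = lights.size() * sizeof(PointLight);
    std::vector<unsigned char> blob(headerSize + dataSize, 0);

    const GLint count = static_cast<GLint>(lights.size());
    std::memcpy(blob.data(), &count, sizeof(count));
    if (!lights.empty())
        std::memcpy(blob.data() + headerSize, lights.data(), dataSize);

    glBindBuffer(GL_SHADER_STORAGE_BUFFER, ssbo);
    glBufferData(GL_SHADER_STORAGE_BUFFER, (GLsizeiptr)blob.size(), blob.data(), GL_DYNAMIC_DRAW);
    // bind to the block's binding point before drawing, e.g.:
    // glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 0, ssbo);
}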

How to sample an SRV when MSAA x4 is enabled? (DirectX 11)

I'm learning DX11 from Introduction_to_3D_Game_Programming_with_Directx_11.
Everything is fine without MSAA. When I enable it, my .fx and C++ code no longer work correctly.
Has anyone run into this, and how do you deal with it?
Code before:
Texture2D gTexture1;
float4 BLEND_PS(VertexOut_SV pin) :SV_TARGET
{
float4 texColor = float4(0.0f, 0.0f, 0.0f, 0.0f);
texColor = gTexture1.Sample(SamAnisotropic, pin.Tex);
return texColor;
}
Because I can't bind a texture created with MSAA to a Texture2D, I keep MSAA on all the time.
Code after:
Texture2DMS<float4> gTexture1;
float4 BLEND_PS(VertexOut_SV pin) :SV_TARGET
{
float4 texColor = float4(0.0f, 0.0f, 0.0f, 0.0f);
texColor = gTexture1.Load(int2(pin.Tex.x*1400, pin.Tex.y*900), 0);
return texColor;
}
But texColor is not the pixel I want. How do I sample an SRV with MSAA?
How do I convert a UAV without MSAA into an SRV with MSAA?
And how do I enable and disable MSAA in the C++ game code, together with the corresponding HLSL code?
Do I have to keep a different HLSL shader for each case?
For 'standard' MSAA use, you do the following:
When creating your swap chain and render target view, set DXGI_SWAP_CHAIN_DESC.SampleDesc.Count or DXGI_SWAP_CHAIN_DESC1.SampleDesc.Count to 2, 4, 8, etc.
When creating your depth buffer/stencil, you need to use the same sample count for D3D11_TEXTURE2D_DESC.SampleDesc.Count.
When creating your render target view, you need to use D3D11_RTV_DIMENSION_TEXTURE2DMS (or pass nullptr for the view description so it matches the resource exactly)
When creating your depth buffer/stencil view, you need to use D3D11_DSV_DIMENSION_TEXTURE2DMS (or pass nullptr for the view description so it matches the resource exactly)
When rendering, you need to use a rasterizer state with D3D11_RASTERIZER_DESC.MultisampleEnable set to TRUE (see the sketch after this list).
See also the Simple rendering tutorial for DirectX Tool Kit
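To make those steps concrete, here is a minimal sketch (not DirectX Tool Kit code), assuming the swap chain was already created with SampleDesc.Count = sampleCount using one of the bit-blt swap effects; names and error handling are illustrative only:
#include <d3d11.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

// Creates the render target view, MSAA depth/stencil buffer and rasterizer
// state for a swap chain whose back buffer has SampleDesc.Count = sampleCount.
HRESULT CreateMsaaTargets(ID3D11Device* device, IDXGISwapChain* swapChain,
                          UINT width, UINT height, UINT sampleCount,
                          ComPtr<ID3D11RenderTargetView>& rtv,
                          ComPtr<ID3D11DepthStencilView>& dsv,
                          ComPtr<ID3D11RasterizerState>& rsState)
{
    // A null view description makes the view match the resource exactly,
    // i.e. D3D11_RTV_DIMENSION_TEXTURE2DMS for an MSAA back buffer.
    ComPtr<ID3D11Texture2D> backBuffer;
    swapChain->GetBuffer(0, IID_PPV_ARGS(&backBuffer));
    device->CreateRenderTargetView(backBuffer.Get(), nullptr, &rtv);

    // Depth/stencil buffer must use the same SampleDesc.Count.
    D3D11_TEXTURE2D_DESC dsDesc = {};
    dsDesc.Width = width;
    dsDesc.Height = height;
    dsDesc.MipLevels = 1;
    dsDesc.ArraySize = 1;
    dsDesc.Format = DXGI_FORMAT_D24_UNORM_S8_UINT;
    dsDesc.SampleDesc.Count = sampleCount;
    dsDesc.Usage = D3D11_USAGE_DEFAULT;
    dsDesc.BindFlags = D3D11_BIND_DEPTH_STENCIL;

    ComPtr<ID3D11Texture2D> depth;
    device->CreateTexture2D(&dsDesc, nullptr, &depth);
    device->CreateDepthStencilView(depth.Get(), nullptr, &dsv); // TEXTURE2DMS implied

    // Rasterizer state with MultisampleEnable = TRUE.
    D3D11_RASTERIZER_DESC rs = {};
    rs.FillMode = D3D11_FILL_SOLID;
    rs.CullMode = D3D11_CULL_BACK;
    rs.DepthClipEnable = TRUE;
    rs.MultisampleEnable = TRUE;
    return device->CreateRasterizerState(&rs, &rsState);
}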
Sample count
Depending on the Direct3D feature level, some MSAA sample counts are required for particular render target formats. You can use CheckFormatSupport to verify that a render target format supports MSAA:
UINT formatSupport = 0;
if (FAILED(device->CheckFormatSupport(m_backBufferFormat, &formatSupport)))
{
throw std::exception("CheckFormatSupport");
}
UINT flags = D3D11_FORMAT_SUPPORT_MULTISAMPLE_RESOLVE
| D3D11_FORMAT_SUPPORT_MULTISAMPLE_RENDERTARGET;
if ( (formatSupport & flags) != flags )
{
// error
}
You then use CheckMultisampleQualityLevels to verify the sample count is supported. This code finds the highest supported MSAA level count for a particular format:
for (m_sampleCount = D3D11_MAX_MULTISAMPLE_SAMPLE_COUNT;
m_sampleCount > 1; m_sampleCount--)
{
UINT levels = 0;
if (FAILED(device->CheckMultisampleQualityLevels(m_backBufferFormat,
m_sampleCount, &levels)))
continue;
if ( levels > 0)
break;
}
if (m_sampleCount < 2)
{
// error
}
You can also validate the depth/stencil format you want to use supports D3D11_FORMAT_SUPPORT_DEPTH_STENCIL | D3D11_FORMAT_SUPPORT_MULTISAMPLE_RENDERTARGET.
Flip Style modes
The technique above only works for the older "bit-blt" presentation modes DXGI_SWAP_EFFECT_DISCARD or DXGI_SWAP_EFFECT_SEQUENTIAL. For UWP and DirectX 12 you are required to use DXGI_SWAP_EFFECT_FLIP_SEQUENTIAL or DXGI_SWAP_EFFECT_FLIP_DISCARD, which will fail if you attempt to create a back buffer with a SampleCount > 1.
In this case, you create the back buffer with a SampleCount of 1 and create your own MSAA render target 2D texture. You point your render target view at your MSAA render target, and before you Present you call ResolveSubresource from your MSAA render target to the back buffer. This is exactly what DXGI did for you 'behind the scenes' with the older swap effects.
For gamma-correct rendering (i.e. when you use a back buffer format ending in _SRGB), the newer flip styles require that you use the non-SRGB equivalent for the back buffer format or the swap chain creation will fail. You set the SRGB format on the render target view instead.
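A minimal sketch of that resolve-then-present step, with illustrative names (msaaColor is your own MSAA render target texture, backBuffer is the single-sample swap chain buffer, and both use the same non-SRGB format):
#include <d3d11.h>

void PresentWithResolve(ID3D11DeviceContext* context, IDXGISwapChain* swapChain,
                        ID3D11Texture2D* msaaColor, ID3D11Texture2D* backBuffer)
{
    // Collapse the MSAA samples into the single-sample back buffer...
    context->ResolveSubresource(backBuffer, 0,        // destination (SampleCount = 1)
                                msaaColor, 0,         // source (SampleCount > 1)
                                DXGI_FORMAT_B8G8R8A8_UNORM);
    // ...and only then present.
    swapChain->Present(1, 0);
}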

Convert 3D Studio Max model (3DS/MAX) to QCAR SDK for iOS / OpenGL ES compatible format?

Qualcomm Augmented Reality (QCAR) for iOS, which makes use of OpenGL ES, displays a 3D model. It reads several files:
a vertices, texture coordinates, indices & normals list (in the form of ONE .h header file, e.g. Teapot.h)
a texture file (in PNG format)
shader files (in FSH and VSH format)
My question is: how do I convert a 3D Studio Max (3ds/max) file into the vertices, texture coordinates, indices & normals list? Also, during the conversion, can the shader files be generated from the settings in the 3DS file as well?
The files are used in QCAR SDK for iOS, version 1.0.
As an example, the file content is as follow:
#ifndef _QCAR_TEAPOT_OBJECT_H_
#define _QCAR_TEAPOT_OBJECT_H_
#define NUM_TEAPOT_OBJECT_VERTEX 824
#define NUM_TEAPOT_OBJECT_INDEX 1024 * 3
static const float teapotVertices[NUM_TEAPOT_OBJECT_VERTEX * 3] =
{
// vertices here
};
static const float teapotTexCoords[NUM_TEAPOT_OBJECT_VERTEX * 2] =
{
// texture coordinates here
};
static const float teapotNormals[NUM_TEAPOT_OBJECT_VERTEX * 3] =
{
// normals here
};
static const unsigned short teapotIndices[NUM_TEAPOT_OBJECT_INDEX] =
{
// indices here
};
#endif
I believe that what you are looking for is here: 3DS to Header utility.
In answer to your second question: 3DS material settings could in theory be converted to shaders if someone wrote all the shaders that 3DS can use, plus some code to select and configure the right shader based on the material properties. But no, there is no direct / easy way to do that. Although, if I recall correctly, 3DS doesn't offer too many material choices, so there might not be too many shaders required in order to do this.
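Whatever converter you use, arrays in that header layout end up being fed to OpenGL ES 2.0 roughly like this (a sketch, not QCAR sample code; vertexHandle, normalHandle and texCoordHandle are attribute locations from your own shaders):
// One deinterleaved array per attribute, shared across a single index list.
glVertexAttribPointer(vertexHandle,   3, GL_FLOAT, GL_FALSE, 0, teapotVertices);
glVertexAttribPointer(normalHandle,   3, GL_FLOAT, GL_FALSE, 0, teapotNormals);
glVertexAttribPointer(texCoordHandle, 2, GL_FLOAT, GL_FALSE, 0, teapotTexCoords);
glEnableVertexAttribArray(vertexHandle);
glEnableVertexAttribArray(normalHandle);
glEnableVertexAttribArray(texCoordHandle);
glDrawElements(GL_TRIANGLES, NUM_TEAPOT_OBJECT_INDEX, GL_UNSIGNED_SHORT, teapotIndices);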

How do I replace glBegin() and related functions in OpenGL ES 2.0?

I have OpenGL code like the following that I'd like to port to OpenGL ES 2.0:
for (surfnum=0;surfnum < surftotal;surfnum++){
for (i=0;i<triNum[surfnum];i++){
glBegin(GL_POLYGON);
glNormal3fv(triArray[surfnum][i].normpt1);
glVertex3fv(triArray[surfnum][i].pt1);
glNormal3fv(triArray[surfnum][i].normpt2);
glVertex3fv(triArray[surfnum][i].pt2);
glNormal3fv(triArray[surfnum][i].normpt3);
glVertex3fv(triArray[surfnum][i].pt3);
glEnd();
glFlush();
}
}
OpenGL ES 2.0 lacks GL_POLYGON, glNormal3fv, glVertex3fv, glEnd, glBegin, etc., so how do I replace these functions?
P.S.: I am doing this in Ubuntu 10.10 through an emulator.
You use Vertex Buffer Objects. Tutorial at NeHe: http://nehe.gamedev.net/tutorial/vertex_buffer_objects/22002/
The tutorial (the main text) is written for Windows. OpenGL ES 2 on Android differs in that you don't have to load extensions manually and you are given a properly prepared OpenGL context by the egl... functions.
Another readable tutorial is
http://www.songho.ca/opengl/gl_vbo.html
GL_POLYGON has been removed from OpenGL 3 and OpenGL ES since it is cumbersome to work with and almost never used. A GL_POLYGON can also be replaced perfectly by a GL_TRIANGLE_FAN. Or you do the clean thing and tessellate polygonal geometry into triangles yourself.
A basic example, to draw a triangle in OpenGL ES:
GLfloat glverts[9];
glVertexPointer(3, GL_FLOAT, 0, glverts);
glEnableClientState(GL_VERTEX_ARRAY);
//fill in vertex positions with your data
for (int i = 0; i < 3; i++) {
glverts[i*3] = ...;
glverts[i*3+1] = ...;
glverts[i*3+2] = ...;
}
glDrawArrays(GL_TRIANGLE_FAN, 0, 3);
EDIT: sorry, this is for OpenGL ES 1.1, not 2.0
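For OpenGL ES 2.0 the same idea becomes: put the tessellated triangle data into a contiguous array (or VBO) and draw it with vertex attributes and your own shaders. A minimal sketch, assuming you have already compiled and linked a program with a_position and a_normal attributes and flattened triArray into an interleaved array on the CPU (drawTriangles and Vertex are illustrative names):
#include <GLES2/gl2.h>

typedef struct { GLfloat px, py, pz, nx, ny, nz; } Vertex;

/* Upload an interleaved position+normal array once and draw it.
   'program' is an already-linked shader program with "a_position"
   and "a_normal" attributes. */
static void drawTriangles(GLuint program, const Vertex *verts, GLsizei vertCount)
{
    GLuint vbo;
    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, vertCount * (GLsizeiptr)sizeof(Vertex), verts, GL_STATIC_DRAW);

    GLint posLoc  = glGetAttribLocation(program, "a_position");
    GLint normLoc = glGetAttribLocation(program, "a_normal");
    glEnableVertexAttribArray((GLuint)posLoc);
    glEnableVertexAttribArray((GLuint)normLoc);
    glVertexAttribPointer((GLuint)posLoc,  3, GL_FLOAT, GL_FALSE, sizeof(Vertex), (const void *)0);
    glVertexAttribPointer((GLuint)normLoc, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex),
                          (const void *)(3 * sizeof(GLfloat)));

    glUseProgram(program);
    glDrawArrays(GL_TRIANGLES, 0, vertCount);

    glDeleteBuffers(1, &vbo);   /* in real code, keep the VBO and reuse it */
}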

How to change the content of the vertex buffer used with glDrawArrays in OpenGL ES

I have some triangle polygons and I draw them in the traditional way:
(Android Java code)
gl.glDrawArrays(GL10.GL_TRIANGLES, i, j);
I want to update the vertex coordinates of the triangles. All of the tutorials I've found use initial vertex data and then only apply transformations to it. I need to change each vertex coordinate independently.
I change the content of the array that was used to create the related vertex buffer, but it doesn't change anything on the screen. Rebuilding the vertex buffer each frame doesn't seem to be the right way, I guess.
Can you point me to any example source code, if you know of any?
You seem to be looking for glBufferSubData. Basically, you update the contents of your array just as you've described, then you call glBufferSubData to update the vertex buffer object with the new values.
This assumes you're modifying only a relatively small subset of the data. If you're modifying most of the data, it's generally better to just call glBufferData again instead.
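In the C API that boils down to something like the following sketch (offsets and sizes are in bytes; vbo, firstVertex, vertexCount and newData are illustrative names, not existing variables):
// Update a sub-range of an existing VBO in place.
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferSubData(GL_ARRAY_BUFFER,
                firstVertex * 3 * sizeof(GLfloat),   // byte offset of the region to replace
                vertexCount * 3 * sizeof(GLfloat),   // byte size of the new data
                newData);                            // pointer to the updated floats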
I found out that most of my problem was Java/Android related. #jerry proposed the right solution to the main idea of my question, but I'll address the Java/Android related parts of the problem:
First, the method signatures of Android's Renderer interface give you a GL10 object as a parameter, but some of the methods you need are OpenGL ES 1.1, so you need to cast the gl object:
public void onDrawFrame(GL10 gl) {
    GL11 gl11 = (GL11)gl;
Of course, if the device doesn't support OpenGL ES 1.1 your code won't work, but that's outside our topic.
My other problem was the differences in the Java implementation of OpenGL ES. In Java you create a corresponding Buffer object, which is a native Java class in the java.nio package, and fill this buffer from an array you specify:
public static FloatBuffer createFloatBuffer(float[] array){
    // 4 bytes per float; allocateDirect so OpenGL can read the buffer directly
    ByteBuffer byteBuf = ByteBuffer.allocateDirect(array.length * 4);
    byteBuf.order(ByteOrder.nativeOrder());
    FloatBuffer fbuf = byteBuf.asFloatBuffer();
    fbuf.put(array);
    fbuf.position(0);   // rewind so OpenGL reads from the start
    return fbuf;
}
After this part you have nothing more to do with the array object; you need to update the buffer object. You can update the content of the buffer with the put(index, value) method of the corresponding buffer class.
Here is the draw() method of an object we want to draw. vertexBuffer is a FloatBuffer object. I think you can get the idea from this snippet.
public void draw(GL11 gl) {
    vertexBuffer.put(index, floatValue);
    // size is given in bytes (4 per float); the VBO being updated must already be bound to GL_ARRAY_BUFFER
    gl.glBufferSubData(GL11.GL_ARRAY_BUFFER, 0, vertexBuffer.capacity() * 4, vertexBuffer);
    gl.glEnableClientState(GL10.GL_VERTEX_ARRAY);
    gl.glEnableClientState(GL10.GL_TEXTURE_COORD_ARRAY);
    gl.glVertexPointer(3, GL10.GL_FLOAT, 0, vertexBuffer);
    gl.glTexCoordPointer(2, GL10.GL_FLOAT, 0, textureBuffer);
    gl.glDrawArrays(GL10.GL_TRIANGLES, 0, 3);
    gl.glDisableClientState(GL10.GL_VERTEX_ARRAY);
    gl.glDisableClientState(GL10.GL_TEXTURE_COORD_ARRAY);
}
Please correct me if there are any misleading points.
I think what you want to do is create multiple vertex buffers on initialization and then just bind to different ones using glBindBuffer()
