Obtain indices from .obj - opengl-es

I don't know OpenGL ES, but I need to use a .obj 3D model in my Android app.
In the .obj file I can find vertices, texcoords and normals, but there are no indices; instead there are face elements.
Can anyone clearly explain how to obtain indices from a .obj file?

Possibly a little hard to explain clearly, because what OBJ considers a vertex is not what OpenGL considers a vertex. Let's find out...
An OBJ file establishes lists of obj-vertices (v), texture coordinates (vt) and normals (vn). You probably don't ever want to hand these to OpenGL directly (but skip to the end for the caveat). They're just a way for your loading code to establish the meaning of v1, vt3, etc.
The only place that OpenGL-vertices are specified is within the f statement. E.g. in "f 1/1/1 2/2/2 3/3/3", each v/vt/vn index triple means "the OpenGL vertex with the location, texture coordinate and normal specified back in those lists".
So a workable solution to loading is, in pseudocode (a concrete sketch follows the steps below):
instantiate an empty hash map from v/vt/vn triples to opengl-vertex indices, an empty opengl-vertex list, and an empty list of indices to supply later to glDrawElements;
for each triple in the OBJ file:
look into the hash map to determine whether it is already in the opengl-vertex list and, if so, get its index and add it to your elements list;
if not, assign the next available index to the triple (so, this is just an incrementing number), put that into both the hash map and the elements list, then combine the triple into one vertex and append it to the opengl-vertex list.
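To make that concrete, here is a minimal C++ sketch of that loop. All names are illustrative; it assumes the v/vt/vn lists and the face triples have already been parsed into the containers shown, with OBJ's 1-based indices converted to 0-based, and it uses an ordered std::map where any hash map would do the same job:

#include <array>
#include <cstdint>
#include <map>
#include <vector>

struct ObjData {                                  // the parsed v / vt / vn lists
    std::vector<std::array<float, 3>> positions;
    std::vector<std::array<float, 2>> texcoords;
    std::vector<std::array<float, 3>> normals;
};

typedef std::array<int, 3> Triple;                // (v, vt, vn), 0-based

void buildBuffers(const ObjData& obj,
                  const std::vector<Triple>& faceTriples,
                  std::vector<float>& glVertices,           // interleaved GL vertices
                  std::vector<std::uint16_t>& glIndices) {  // for glDrawElements
    std::map<Triple, std::uint16_t> seen;                   // triple -> GL index
    for (const Triple& t : faceTriples) {
        auto it = seen.find(t);
        if (it != seen.end()) {
            glIndices.push_back(it->second);      // vertex already emitted; reuse it
        } else {
            std::uint16_t index = (std::uint16_t)seen.size();
            seen.insert(std::make_pair(t, index));
            glIndices.push_back(index);
            // Combine the triple into one interleaved OpenGL vertex.
            const std::array<float, 3>& p  = obj.positions[t[0]];
            const std::array<float, 2>& uv = obj.texcoords[t[1]];
            const std::array<float, 3>& n  = obj.normals[t[2]];
            glVertices.insert(glVertices.end(), p.begin(), p.end());
            glVertices.insert(glVertices.end(), uv.begin(), uv.end());
            glVertices.insert(glVertices.end(), n.begin(), n.end());
        }
    }
}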
You can try to do better than that by attempting to minimise the potentially highly random access to your opengl-vertex list that this implies at drawing time, but don't prematurely optimise.
Caveat:
If your GPU supports vertex texture fetch (i.e. texture sampling within the vertex shader) then you could just supply the triples directly to OpenGL, having accumulated the obj-vertices, etc., into texture maps, and do the indirect lookup in your vertex shader. With vertex texture fetch, textures really just become random-access 2D arrays. However, many Android GPUs don't support vertex texture fetch (even some that support ES 3, which ostensibly makes it a requirement yet still allows an implementation to report that it supports a maximum of zero vertex samplers).

Related

Why DirectX 11 doesn't support multiple index buffers in IASetIndexBuffer

You can set multiple vertex buffers with IASetVertexBuffers,
but there is no plural version of IASetIndexBuffer.
What is the point of creating multiple (non-interleaved) vertex buffers if you won't be able to refer to them with individual index buffers?
(Assume I have a struct called vector3 with 3 floats x, y, z.)
Say I have a model of a human with 250,000 vertices and 1,000,000 triangles;
I will create a vertex buffer of size 250,000 * sizeof(vector3)
for vertex LOCATIONS, and also
another vertex buffer of size 1,000,000 * 3 * sizeof(vector3) for vertex NORMALS (and probably another for the diffuse texture).
I can set these vertex buffers like:
ID3D11Buffer* vbs[2] = { meshHandle->VertexBuffer_Position, meshHandle->VertexBuffer_Normal };
uint strides[] = { Vector3f_size, Vector3f_size };
uint offsets[] = { 0, 0 };
ImmediateContext->IASetVertexBuffers(0, 2, vbs, strides, offsets);
How can I set separate index buffers for these vertex buffers if IASetIndexBuffer only supports one index buffer?
And also (I know there are techniques for decals, like creating extra triangles from the original model, but):
Say I want to render a small texture like a SCAR on this human model's face (say the forehead), and this scar will only spread across 4 triangles.
Is it possible to create a UV buffer (for only those 4 triangles) and create 3 different index buffers for locations, normals and UVs for only those 4 triangles, while reusing the same original vertex buffers (the same data from the full human model)? I don't want to create tons of UV data that will never be rendered anywhere besides the character's forehead (and I don't want to re-use or re-create vertex position data for these secondary texture layers (decals)).
EDIT:
I realized I didn't properly ask a question, so my question is:
Did I misunderstand the non-interleaved model structure (is it used for some other reason than having non-aligned vertex components)?
Or am I approaching the non-interleaved structure wrong (is there a way to define multiple non-aligned vertex buffers and draw them with only one index buffer)?
The reason you can't have more than one index buffer bound at a time is that you need to specify a fixed number of primitives when you make a call to DrawIndexed.
So in your example, if you have one index buffer with 3,000 primitives and another with 12,000 primitives, the pipeline would have no idea how to match the first set to the second set.
It is generally normal that some vertex data (mostly positions) eventually needs to be duplicated across your vertex buffers, since your buffers need to be the same size.
The index buffer works as a "lookup table", so your data across vertex buffers needs to be consistent.
The non-interleaved model structure has several advantages:
First, using separate buffers can lead to better performance if you need to draw the model several times and some draws do not require all attributes.
For example, when you render a shadow map you only need to access positions, so in interleaved mode you still have to bind a large data structure and access its elements in a non-contiguous way (the Input Assembler does that). With non-interleaved data, positions are contiguous in memory, so fetches will be much faster.
Non-interleaved layouts also make it easier to process a subset of the attributes: it is common nowadays to perform displacement or skinning in a compute shader, in which case you can easily create another Position+Normal buffer, run your skinning on those, and attach them to the pipeline once processed (while keeping the UV buffer intact).
If you want to draw non-aligned vertex buffers, you could use structured buffers instead (and use SV_VertexID and some custom lookup tables in your shader code).
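For illustration, a minimal sketch of that structured-buffer route; device, context, positions and vertexCount are assumed to exist, and the details are worth checking against the D3D11 documentation:

#include <d3d11.h>

// Create the position buffer as a structured buffer instead of a vertex buffer.
D3D11_BUFFER_DESC desc = {};
desc.ByteWidth           = vertexCount * sizeof(vector3);
desc.Usage               = D3D11_USAGE_IMMUTABLE;
desc.BindFlags           = D3D11_BIND_SHADER_RESOURCE;
desc.MiscFlags           = D3D11_RESOURCE_MISC_BUFFER_STRUCTURED;
desc.StructureByteStride = sizeof(vector3);

D3D11_SUBRESOURCE_DATA init = { positions, 0, 0 };
ID3D11Buffer* positionBuffer = nullptr;
device->CreateBuffer(&desc, &init, &positionBuffer);

// Expose it to the vertex shader through a shader resource view.
D3D11_SHADER_RESOURCE_VIEW_DESC srv = {};
srv.Format              = DXGI_FORMAT_UNKNOWN;   // required for structured buffers
srv.ViewDimension       = D3D11_SRV_DIMENSION_BUFFER;
srv.Buffer.FirstElement = 0;
srv.Buffer.NumElements  = vertexCount;

ID3D11ShaderResourceView* positionSRV = nullptr;
device->CreateShaderResourceView(positionBuffer, &srv, &positionSRV);
context->VSSetShaderResources(0, 1, &positionSRV);

// In HLSL, declare: StructuredBuffer<float3> Positions : register(t0);
// and index it freely in the vertex shader, e.g. via SV_VertexID and your own
// lookup table, instead of relying on the Input Assembler.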

CGAL - Preserve attributes on vertex using corefine boolean operations

I'm relatively new to CGAL, so please pardon me if there is an obvious fix to this that I'm missing.
I was following this example on preserving attributes on facets of a polyhedral mesh after a corefine+boolean operation:
https://github.com/CGAL/cgal/blob/master/Polygon_mesh_processing/examples/Polygon_mesh_processing/corefinement_mesh_union_with_attributes.cpp
I wanted to know if it was possible to construct a Visitor struct which similarly operates on vertices of a polyhedral mesh. Ideally, I'd like to interpolate property values (a vector of doubles) from the original mesh onto the new boolean output vertices, but I can settle for imputing nearest neighbor values.
The difficulty I was running into was that the after_subface_created and after_face_copy functions overloaded within Visitor operate before the halfedge structure is set for the target face, and hence I'm not sure how to access the vertices of the target face. Is there a way to use the Visitor structure within corefinement to do this?
In an older version of the code I used to have a visitor handling vertex creation/copy, but it has not been backported (due to lack of time). What you are trying to do is a good workaround, but you should use the visitor to collect the information (say, fill a map [input face] -> std::vector<output face>) and process that map once the algorithm is done.
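A rough sketch of that idea follows. The hook names are taken from the corefinement_mesh_union_with_attributes.cpp example linked above; check the exact signatures against your CGAL version:

#include <CGAL/Exact_predicates_inexact_constructions_kernel.h>
#include <CGAL/Surface_mesh.h>
#include <CGAL/Polygon_mesh_processing/corefinement.h>
#include <map>
#include <vector>

namespace PMP = CGAL::Polygon_mesh_processing;
typedef CGAL::Exact_predicates_inexact_constructions_kernel K;
typedef CGAL::Surface_mesh<K::Point_3>                      Mesh;
typedef boost::graph_traits<Mesh>::face_descriptor          face_descriptor;

struct Face_origin_visitor
  : public PMP::Corefinement::Default_visitor<Mesh>
{
  // [input face] -> output faces derived from it.
  std::map<face_descriptor, std::vector<face_descriptor> >* face_map;
  face_descriptor split_face; // face currently being subdivided

  void before_subface_creations(face_descriptor f_split, Mesh&)
  { split_face = f_split; }

  void after_subface_created(face_descriptor f_new, Mesh&)
  { (*face_map)[split_face].push_back(f_new); }

  void after_face_copy(face_descriptor f_src, Mesh&,
                       face_descriptor f_tgt, Mesh&)
  { (*face_map)[f_src].push_back(f_tgt); }
};

Once the boolean operation returns, the halfedge structure of the output is valid, so you can walk each output face recorded in the map, visit its vertices, and interpolate (or copy nearest-neighbour) vertex properties from the corresponding input face.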

Efficiently retrieve Z ordered objects in viewport in 2D game

Imagine a 2D game with a large play area, say 10000x10000 pixels. Now imagine there are thousands of objects spread across this area. All the objects are in a Z order list, so every object has a well-defined position relative to every other object even if they are far away.
Suppose there is a viewport in to this play area showing say a 500x500 area of this play area. Obviously if the algorithm is "for each object in Z order list, if inside viewport, render it" then you waste a lot of time iterating all the thousands of objects far outside the viewport. A better way would be to maintain a Z-ordered list of objects near or inside the viewport.
If both the objects and the viewport are moving, what is an efficient way of maintaining a Z-ordered list of objects which are candidates to draw? This is for a general-purpose game engine so there are not many other assumptions or details you can add in to take advantage of: the problem is pretty much just that.
You do not need to keep your memory layout strongly ordered by Z. Instead you need to store your objects in a space-partitioning structure that is oriented along the viewing surface.
A typical partitioning structure in 2D is a quadtree. You can use a binary tree, you can use a grid, or you can use a spatial hashing scheme. You can even mix those techniques and combine them one into another.
There is no "best"; you have to weigh the ease of writing and maintaining the code against the memory you have available.
Let us consider the grid: it is the simplest to implement, the fastest to access, and the easiest to traverse (traversing means moving to neighbouring cells).
Imagine you allow yourself 20 MB of RAM usage for your grid skeleton, and consider that each cell's content is just a small object (like a std::vector or a C# List), say 50 bytes. For a 10k-pixel square surface you then have:
sqrt(20*1024*1024 / 50) = 647
You get 647 cells per dimension, and therefore 10k/647 ≈ 15-pixel-wide cells.
That is still very small, so I suppose perfectly acceptable. You can adjust the numbers to get cells of 512 pixels, for example. It should be a good fit when a few cells cover the viewport.
Then it is trivially easy to determine which cells are activated by the viewport: divide the top-left corner by the cell size and floor the result; this gives you the cell index directly (provided both your viewport space and grid space start at 0,0; otherwise you need to offset).
Finally, take the bottom-right corner and determine its cell coordinate the same way; then you can do a dual loop (x and y) between the min and max to iterate over the activated cells.
When treating a cell, you can draw the objects it contains by going through the list of objects that you previously stored there.
Beware of objects that span 2 cells or more. You need to make a choice: either store them only once, but then your search algorithm always needs to know the size of the biggest element in the region and also search the lists of the neighbouring cells (going as far as necessary to be sure to cover at least the size of that biggest element);
or you can store them multiple times (my preferred way) and simply make sure, when you iterate cells, that you treat each object only once per frame. This is very easily achieved with a frame id stored in the object structure (as a mutable member), as in the sketch below.
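A bare-bones C++ sketch of that grid; the names and the 512-pixel cell size are just the assumptions made above, and the z member anticipates the Z-ordering note further down:

#include <algorithm>
#include <cstdint>
#include <vector>

struct Object {
    float x, y, w, h;                // axis-aligned bounds in world space
    float z;                         // draw order
    mutable std::uint64_t lastFrame; // frame id of the last draw
};

const int kWorldSize = 10000;
const int kCellSize  = 512;
const int kGridDim   = (kWorldSize + kCellSize - 1) / kCellSize;

std::vector<const Object*> grid[kGridDim][kGridDim];

void insert(const Object& o) {
    // Store the object in every cell its bounds touch (the "multiple times" option).
    int x0 = std::max(0, (int)o.x / kCellSize);
    int y0 = std::max(0, (int)o.y / kCellSize);
    int x1 = std::min(kGridDim - 1, (int)(o.x + o.w) / kCellSize);
    int y1 = std::min(kGridDim - 1, (int)(o.y + o.h) / kCellSize);
    for (int y = y0; y <= y1; ++y)
        for (int x = x0; x <= x1; ++x)
            grid[y][x].push_back(&o);
}

template <class DrawFn>
void drawViewport(float vx, float vy, float vw, float vh,
                  std::uint64_t frame, DrawFn draw) {
    // Floor the viewport corners down to cell indices, then dual-loop over cells.
    int x0 = std::max(0, (int)vx / kCellSize);
    int y0 = std::max(0, (int)vy / kCellSize);
    int x1 = std::min(kGridDim - 1, (int)(vx + vw) / kCellSize);
    int y1 = std::min(kGridDim - 1, (int)(vy + vh) / kCellSize);
    for (int y = y0; y <= y1; ++y)
        for (int x = x0; x <= x1; ++x)
            for (const Object* o : grid[y][x])
                if (o->lastFrame != frame) {  // treat each object once per frame
                    o->lastFrame = frame;
                    draw(*o);
                }
}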
The same logic applies to more flexible partitions like binary trees.
I have implementation for both available in my engine, check the code out, it may help you get through the details: http://sourceforge.net/projects/carnage-engine/
A final word about your Z ordering: if you had a separate memory store for each Z, then you have already done a space partitioning, just not along the most useful axis.
This can be called layering.
What you can do as an optimization is, instead of storing plain lists of objects in your cells, store (ordered) maps keyed by the objects' Z, so that iterating a cell visits its objects in Z order.
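For instance, swapping the cell's std::vector for a multimap keyed by Z (reusing Object and kGridDim from the sketch above) makes each cell iterate in Z order for free:

#include <map>

std::multimap<float, const Object*> zgrid[kGridDim][kGridDim];

// insert with: zgrid[y][x].insert(std::make_pair(o.z, &o));
// iterating a cell then visits its objects in ascending Z automatically.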
A typical solution to this sort of problem is to group your objects according to their approximate XY location. For example, you can bucket them into 500x500 regions, i.e. objects intersecting [0,500]x[0,500], objects intersecting [500,1000]x[0,500], etc. Very large objects might be listed in multiple buckets, but presumably there are not too many very large objects.
For each viewport, you would need to check at most 4 buckets for objects to render. You will generally only look at about as many objects as you need to render anyway, so it should be efficient. This does require a bit more work updating when you reposition objects. However, assuming that a typical object is only in one bucket, it should still be pretty fast.

Grab objects near camera

I was looking at http://threejs.org/examples/webgl_nearestneighbour.html and had a few questions come up. I see that they use the kdtree to store all the particles' positions and then have a function to determine the nearest particle and color it. Let's say that you have a canvas with around 100 buffered geometries with around 72,000 vertices per geometry. The only way I know to do this is to get the positions of the buffered geometries and then put them into the kdtree to determine the nearest vertex and go from there. This sounds very expensive.
What other way is there to return the objects that are near the camera? Something like how THREE.LOD does it? http://threejs.org/examples/#webgl_lod It has the ability to see how far away an object is and render different levels of detail depending on the settings you provided.
Define "expensive". The point of the kdtree is to find nearest neighbour elements quickly, its primary focus is not on saving memory (Although it does everything inplace on a typed array, it's quit cheap in terms of memory already). If you need to save memory you maybe have to find another way.
Yet a typed array with length 21'600'000 is indeed a bit long. I highly doubt you have to have every single vertex in there. Why not have a position reference point for every geometry part? And, if you need to get the vertices associated to that point, a dictionary. Then you can call myGeometryVertices[ geometryReferencePoint ].
Three.LOD works with raycasts. If you have a (few) hundred objects that might work well. If you have hundred thousands or even millions of positions you'll get some troubles. Also if you're not using meshes; you can't raytrace e.g. a particle.
Really just build your own logic with them. None of those two provide a prebuilt perfect-for-all-cases solution.

How to implement batches using webgl?

I am working on a small game using WebGL. In this game I have a kind of forest consisting of many (100+) tree objects. Because I only have a few different tree models, I rotate and scale each model differently before displaying it.
At the moment I loop over all trees to display them:
for (var tree in trees) {
    trees[tree].display();
}
The display() method of a tree looks like:
display : function() { // tree
    this.treeModel.setRotation(this.rotation);
    this.treeModel.setScale(this.scale);
    this.treeModel.setPosition(this.position);
    this.treeModel.display();
}
Many tree objects share the same treeModel object, so I have to set the rotation/scale/position of the model every time before I display it. The rotation/scale/position values are different for every tree.
The display method of treeModel does all the gl stuff:
display : function() { // treeModel
    // bind texture
    // set uniforms for projection/modelview matrix based on rotation/scale/position
    // bind buffers
    // drawArrays
}
All tree models use the same shader but can use different textures.
Because a single tree model consists of only a few triangles, I want to combine all trees into one VBO and display the whole forest with one drawArrays() call.
Some assumptions to make talking about numbers easier:
There are 250 trees to display
There are 5 different tree models
Every tree model has 50 triangles
Questions I have:
At the moment I have 5 buffers that are each 50 * 3 * 8 (position + normal + texCoord) * floatSize bytes large. If I display all trees with one VBO, I would have a buffer of size 250 * 50 * 3 * 8 * floatSize bytes. I think I can't use an index buffer because I have different position values for every tree (computed from the position values of the tree model and the tree's position/scale/rotation). Is this correct, or is there still a way to use index buffers to reduce the buffer size at least a bit? Maybe there are other ways to optimize this?
How do I handle the different textures of the tree models? I can bind all textures to different texture units, but how can I decide within the shader which texture should be used for the fragment currently being displayed?
When I want to add a new tree (or any other kind of object) to this buffer at runtime: do I have to create a new buffer and copy the content? I think new values can't be added using glMapBuffer. Is this correct?
Element index buffers in WebGL are 16-bit, so they can only address vertices at indices up to 65535; past that you need to use drawArrays instead. It's usually not a big loss.
You can add trees to the end of the buffers using gl.bufferSubData, as long as you allocated the buffer large enough with gl.bufferData up front (a sketch follows below).
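A sketch of that append, shown with the OpenGL ES C API (WebGL's gl.bufferSubData takes the same arguments minus the explicit size); the equal-vertex-count-per-tree layout is an assumption for illustration:

#include <GLES2/gl2.h>
#include <vector>

// treeVerts holds one tree's vertices with rotation/scale/position already
// baked in on the CPU; every tree is assumed to have the same vertex count.
void appendTree(GLuint batchVbo, int treeIndex,
                const std::vector<float>& treeVerts) {
    GLintptr offset = (GLintptr)treeIndex * treeVerts.size() * sizeof(float);
    glBindBuffer(GL_ARRAY_BUFFER, batchVbo);
    // Only valid within the size passed to glBufferData at allocation time.
    glBufferSubData(GL_ARRAY_BUFFER, offset,
                    treeVerts.size() * sizeof(float), treeVerts.data());
}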
If your textures are reasonably sized (like 128x128 or 256x256), you can probably merge them into one big texture and handle the whole thing with the UV coordinates. If not, you can add another attribute saying which texture the vertex belongs to and use a condition in the shader, or alternatively an array of sampler2Ds (not sure that works, never tried it). Remember that conditions in shaders are pretty slow.
If you decide to stick with your current solution, make sure to sort the trees so the ones using the same texture are rendered one after another; keeping state switching down is essential, always.
A few thoughts:
Once you plant a tree in your world, do you ever modify it? Will it animate at all, or is it just static geometry? If it's truly static, you could always build a single buffer containing several copies of each tree: as you append trees, first apply (in JavaScript) that instance's world transform to its vertices. If you're using triangle strips, you can link trees together using degenerate triangles.
You could roll your own pseudo-instanced drawing:
Encode an instance ID in the array buffer: just set it to the same value for all vertices that are part of the same tree instance. I seem to recall that you can't have non-float vertex attributes in ES GLSL (maybe that's a Chrome limitation), so you will need to bring it in as a float but use it as an int. Since it comes in as a float, you have to deal with the fact that it's interpolated across your triangle, so the value will have minor fluctuations; simply rounding to the nearest integer fixes that right up.
Use a separate texture (which I will call the data texture) to encode all the per-instance information. In your vertex shader, look at the instance ID of the current vertex and use that to compute a texture coordinate in the data texture. Pull out whatever you need to transform the current vertex, and apply it. I think this is called a "dependent texture read", which is generally frowned upon because it can cause performance issues, but it might help you batch your geometry, which can help solve performance issues. If you're interested, you'll have to try it and see what happens.
Hope for an extension to support real instanced drawing.
Your current approach isn't so bad. I'd say: Stick with it until you hit some wall.
50 triangles is already a reasonable batch size for a single drawElements/drawArrays call. It's not optimal, but also not so bad. So for every tree, change the parameters like location, texture and maybe shape through uniforms, then do a draw call for each tree. A total of 250 drawElements calls isn't so bad either.
So I'd use one single VBO that contains all the tree geometry variants in use. I'd actually split the trees up into building blocks, so that I could recombine them for added variety. And for each tree, set appropriate offsets into the VBO before calling drawArrays or drawElements, as sketched below.
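A sketch of that per-tree loop over a shared VBO (OpenGL ES C API; the uniform location, the range table and the instance struct are all assumptions):

#include <GLES2/gl2.h>
#include <vector>

struct ModelRange { GLint first; GLsizei count; }; // where each variant sits in the VBO

struct TreeInstance {
    int     model;            // which geometry variant to use
    GLfloat modelMatrix[16];  // per-tree rotation/scale/translation
};

void drawForest(GLuint sharedVbo, GLint uModelMatrix,
                const std::vector<ModelRange>& ranges,
                const std::vector<TreeInstance>& trees) {
    glBindBuffer(GL_ARRAY_BUFFER, sharedVbo);
    // ... set up the vertex attribute pointers once for the shared layout ...
    for (size_t i = 0; i < trees.size(); ++i) {
        const TreeInstance& t = trees[i];
        glUniformMatrix4fv(uModelMatrix, 1, GL_FALSE, t.modelMatrix);
        const ModelRange& r = ranges[t.model];
        glDrawArrays(GL_TRIANGLES, r.first, r.count); // offset into the shared VBO
    }
}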
Also don't forget that you can do very cheap field-of-view culling for each tree.