I have many polygons, each with a different number of points. I'm drawing them with LINE_LOOP in 2D (orthographic projection).
Now I want to upload all of them into a single buffer.
The problem is that when I draw the buffer with glDrawArrays, the last point of the first polygon is connected by a line to the first point of the second polygon, and so on.
I know that with glDrawElements I can send indices, which solves the issue, but that means sending a lot of extra data for polygons with many points, and changing LINE_LOOP to LINES.
Is there a way to draw with only the start and end indices of each polygon?
For example
// My 2d polygons points are
polygons = [
0,0, 10,0, 10,5, 5,10, // polygon 1
20,20, 30,20, 30,30 // polygon 2
]
// First polygon starts at index 0, the second at index 8
// If there were a function like this
draw(polygons, [0, 8]);
------ADDITION-----------------
In OpenGL we can do it by calling glMultiDrawArrays - thanks to ratchet freak's answer.
But in WebGL this function does not exist. Is there an alternative?
you can use glMultiDrawArrays:
starts=[0,4,...]
counts=[4,3,...]
glMultiDrawArrays(GL_LINE_LOOP, starts, counts, starts.length);
otherwise with glDrawElements you can specify a primitive restart index
glEnable(GL_PRIMITIVE_RESTART);
glPrimitiveRestartIndex(65535);
index = [0,1,2,3,65535,4,5,6,65535,...]
//bind and fill GL_ELEMENT_ARRAY_BUFFER
glDrawElements(GL_LINE_LOOP, index.size, GL_UNSIGNED_INT, 0);
//will draw lines `0,1 1,2 2,3 3,0 4,5 5,6 6,4`
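For reference, the restart-style index array above can be generated mechanically from the per-polygon vertex counts. A small sketch of that data layout (Python purely for illustration; 65535 is the restart sentinel chosen above):

```python
def build_restart_indices(counts, restart=65535):
    """Concatenate per-polygon index runs, separated by the restart sentinel."""
    indices = []
    base = 0
    for n in counts:
        if indices:                      # separate polygons with the sentinel
            indices.append(restart)
        indices.extend(range(base, base + n))
        base += n
    return indices

# Two polygons with 4 and 3 vertices, as in the question:
print(build_restart_indices([4, 3]))    # → [0, 1, 2, 3, 65535, 4, 5, 6]
```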
With WebGL, which corresponds to OpenGL ES 2.0 features AFAIK, I don't think there's a way around making multiple draw calls. For your example, you would need two draw calls (note that glDrawArrays takes a starting vertex and a vertex count, not float offsets):
glDrawArrays(GL_LINE_LOOP, 0, 4);
glDrawArrays(GL_LINE_LOOP, 4, 3);
You could reduce it to a single draw call by using GL_LINES instead of GL_LINE_LOOP, but that means that your vertex array gets twice as large, because you need the start and end point of each line segment.
If you use an index buffer combined with GL_LINES, the increase is only 50%, as long as you don't have more than 64k vertices in a buffer. The vertex array itself remains at its original size, and you will need two GL_UNSIGNED_SHORT indices per vertex in addition. So that's 4 more bytes per vertex, on top of the 8 bytes (for 2 floats) you have for the vertex coordinates. This is probably your best option with the limited ES 2.0 feature set.
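The GL_LINES index buffer described above can be generated from per-polygon (first vertex, vertex count) ranges; each loop of N vertices expands to N segments, i.e. 2N indices. A sketch of the expansion (Python, purely to illustrate the layout):

```python
def line_loop_to_lines_indices(polygons):
    """polygons: list of (first_vertex, vertex_count) pairs sharing one
    vertex buffer. Returns GL_LINES indices that close each loop."""
    indices = []
    for first, count in polygons:
        for i in range(count):
            indices.append(first + i)                 # segment start
            indices.append(first + (i + 1) % count)   # segment end (wraps)
    return indices

# The question's two polygons: 4 vertices, then 3 vertices
print(line_loop_to_lines_indices([(0, 4), (4, 3)]))
# → [0, 1, 1, 2, 2, 3, 3, 0, 4, 5, 5, 6, 6, 4]
```

The resulting array would be uploaded once as GL_UNSIGNED_SHORT data and drawn with a single glDrawElements(GL_LINES, ...) call.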
Related
You can set multiple vertex buffers with IASetVertexBuffers,
but there is no plural version of IASetIndexBuffer.
What is the point of creating multiple (non-interleaved) vertex buffers if you can't reference them with individual index buffers?
(Assume I have a struct called vector3 with 3 floats: x, y, z.)
Let's say I have a model of a human with 250,000 vertices and 1,000,000 triangles;
I will create a vertex buffer of size 250,000 * sizeof(vector3)
for vertex LOCATIONS, and also
another vertex buffer of size 1,000,000 * 3 * sizeof(vector3) for vertex NORMALS
(and probably another for the diffuse texture).
I can set these vertex buffers like this:
ID3D11Buffer* vbs[2] = { meshHandle->VertexBuffer_Position, meshHandle->VertexBuffer_Normal };
uint strides[] = { Vector3f_size, Vector3f_size };
uint offsets[] = { 0, 0 };
ImmediateContext->IASetVertexBuffers(0, 2, vbs, strides, offsets);
How can I set separate index buffers for these vertex buffers if IASetIndexBuffer only supports one index buffer?
Also (I know there are techniques for decals, like creating extra triangles from the original model, but)
let's say I want to render a small texture like a SCAR on this human model's face (say the forehead), and this scar will only spread across 4 triangles.
Is it possible to create a UV buffer (with only 4 triangles) and 3 different index buffers for locations, normals, and UVs for only those 4 triangles, while using the same original vertex buffers (the same data from the full human model)? I don't want to create tons of UV data that will never be rendered anywhere besides the character's forehead (and I don't want to re-create or duplicate vertex position data for these secondary texture layers (decals)).
EDIT:
I realized I didn't properly ask a question, so my question is:
Did I misunderstand the non-interleaved model structure (is it used
for some other reason rather than just having separate, non-aligned vertex components)?
Or am I approaching the non-interleaved structure wrong (is there a way
to define multiple non-aligned vertex buffers and draw them with
only one index buffer)?
The reason you can't have more than one index buffer bound at a time is that you need to specify a fixed number of primitives when you make a call to DrawIndexed.
So in your example, if you had one index buffer with 3,000 primitives and another with 12,000 primitives, the pipeline would have no idea how to match the first set to the second.
It is generally normal that some vertex data (mostly positions) eventually has to be duplicated across your vertex buffers, since the buffers need to be the same size.
The index buffer works as a "lookup table", so your data needs to be consistent across vertex buffers.
A non-interleaved model structure has several advantages:
First, using separate buffers can lead to better performance if you need to draw the model several times and some draws do not require all attributes.
For example, when you render a shadow map, you only need to access positions. In interleaved mode, you still need to bind a large data structure and access elements in a non-contiguous way (the Input Assembler does that). With non-interleaved data, positions are contiguous in memory, so fetching them is much faster.
Non-interleaved layouts also make it easier to process individual attributes: it is common nowadays to perform displacement or skinning in a compute shader, in which case you can easily create another Position+Normal buffer, perform your skinning on it, and attach it to the pipeline once processed (keeping the UV buffer intact).
If you want to draw non-aligned vertex buffers, you can use Structured Buffers instead (and use SV_VertexID plus custom lookup tables in your shader code).
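As an aside (not part of the answer above, just a common illustration): when source data arrives with one index list per attribute, as in OBJ files, the usual conversion to a single shared index buffer deduplicates each distinct (position index, normal index, uv index) triple and emits one GPU vertex per unique combination. A Python sketch of that conversion:

```python
def flatten_multi_index(corners, positions, normals, uvs):
    """corners: one (pos_i, norm_i, uv_i) tuple per triangle corner.
    Deduplicates identical triples and returns per-vertex attribute
    arrays (now all the same length) plus a single index buffer."""
    remap = {}
    out_pos, out_norm, out_uv, indices = [], [], [], []
    for corner in corners:
        if corner not in remap:
            remap[corner] = len(out_pos)
            pi, ni, ti = corner
            out_pos.append(positions[pi])
            out_norm.append(normals[ni])
            out_uv.append(uvs[ti])
        indices.append(remap[corner])
    return out_pos, out_norm, out_uv, indices
```

Positions shared by corners that differ in normal or UV get duplicated, exactly as the answer describes, and a single IASetIndexBuffer call then suffices.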
I want to implement a simple bullet-trail system (in OpenGL ES 2/3) that allows different textures or materials for different trails. That means the trails have to be rendered in separate draw calls, and each vertex can be modified right before rendering.
I don't know in advance how many draw calls will be made each update, or how many vertices will be passed to each one, so I'm trying to use a single vertex buffer and a single index buffer for all trails, and fill regions of the vertex buffer with different trails' data every frame. The index buffer is filled once with simple values (0, 1, 2, 3, 3, 4, 4, 5, 6, ...) and never changes.
Could you advise some best practices for doing this? Can I make draw calls with different render states and different vertex regions for each batch? Which index regions should I use for each draw call? Must the index offset take the vertex offset into account, or are indices applied relative to the vertex region rather than the whole buffer, so that I can set the index buffer offset to 0 for every draw call? Or am I doing this completely wrong and should do something else?
Thanks!
OK, so here is how I made it work:
I still use a single big buffer for all batches.
For every batch I map just a part of the buffer (a new part for every batch) and change the data in that part.
The indices must take the offset of that part into account. So when you render the part of the buffer holding the 4th, 5th, 6th, and 7th vertices, you have to use the part of the index buffer containing {4, 5, 6, 7}.
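In other words, each batch's indices are rebased by that batch's vertex offset into the shared buffer. A small sketch of the bookkeeping (Python; the names are illustrative):

```python
def layout_batches(batch_vertex_counts, batch_local_indices):
    """Pack batches back-to-back in one vertex buffer and rebase each
    batch's local indices by its vertex offset."""
    offset, batches = 0, []
    for count, local in zip(batch_vertex_counts, batch_local_indices):
        batches.append({"vertex_offset": offset,
                        "indices": [i + offset for i in local]})
        offset += count
    return batches

# Two quads of 4 vertices each, drawn as two triangles (local indices 0..3)
batches = layout_batches([4, 4], [[0, 1, 2, 2, 1, 3], [0, 1, 2, 2, 1, 3]])
print(batches[1]["indices"])   # → [4, 5, 6, 6, 5, 7]
```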
I have the following code in MATLAB, which is supposed to draw a polygon on an image (it has to be an actual 2D image matrix, not just a patch).
numCorners=8;
dotPos=[];
for rr=1:numCorners
dotPos(end+1)=(cos(rr/numCorners*2*pi))*100;
dotPos(end+1)=(sin(rr/numCorners*2*pi))*100;
end
BaseIm=zeros(1000,1000);
dotpos=[500,500];
imageMatrix = drawpolygon(BaseIm, dotPos, 1); % or how else do I draw a white polygon here?
imshow(imageMatrix);
This doesn't work, as drawpolygon does not appear to exist in this form. Any idea how to do this?
Note that the result must be an image of the same size as BaseIm and must be an array of doubles (ints can be converted), as this is test data for another algorithm.
I have since found the inpolygon(xi,yi,xv,yv) function, which I could combine with a for loop if I knew how to call it properly.
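For what it's worth, the inpolygon-over-every-pixel approach reduces to an even-odd (crossing-number) test per pixel. Shown here as a Python sketch of the same logic (a MATLAB version would loop over meshgrid coordinates, or call inpolygon directly):

```python
def point_in_polygon(x, y, poly):
    """Even-odd (crossing number) rule; poly is a list of (x, y) vertices."""
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):                       # edge crosses scanline
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def rasterize(poly, height, width):
    """0/1 double mask the size of the target image (row-major)."""
    return [[1.0 if point_in_polygon(c + 0.5, r + 0.5, poly) else 0.0
             for c in range(width)] for r in range(height)]

# Small square polygon rasterized into an 8x8 mask
mask = rasterize([(1, 1), (5, 1), (5, 5), (1, 5)], 8, 8)
```

This yields an array of doubles the same size as the base image, which matches the stated requirement.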
If you just need to plot two polygons, you can use the fill function.
t=0:2*pi;
x=cos(t)*2;
y=sin(t)*2;
fill(x,y,'r')
hold on
fill(x/2,y/2,'g')
As an alternative, you can use the patch function:
figure
t=0:2*pi;
x=cos(t)*2;
y=sin(t)*2;
patch(x,y,'c')
hold on
patch(x/2,y/2,'k')
Edit
The fill and patch functions also allow you to add polygons over an actual image.
% Load an image on the axes
imshow('Jupiter_New_Horizons.jpg')
hold on
% Get the axis limits (just to center the polygons)
x_lim=get(gca,'xlim')
y_lim=get(gca,'ylim')
% Create the polygon's coords
t=0:2*pi;
x=cos(t)*50+x_lim(2)/2;
y=sin(t)*50+y_lim(2)/2;
% Add the two polygons to the image
f1_h=fill(x,y,'r')
hold on
f1_h=fill(x/2,y/2,'g')
Hope this helps.
I'm learning WebGL and I've been stuck on this problem for half a day.
I'm moving around my scene this way:
mat4.rotate(mvMatrix, degToRad(-angle), [0, 1, 0]);
mat4.translate(mvMatrix, [-currentX, 0, -currentZ]);
How am I supposed to get the coordinates (x/z) of a point in front of me (let's say 10 units) ?
The modelview matrix transforms from model-local space to view space. Now, a point "10 units in front of you" can be anywhere, depending on the space you're interested in. But say you want to know where a point 10 units in front of you is located in model space. Well, nothing simpler than that.
The point 10 units in front of the viewer is located at (0,0,-10) in view space. So all you have to do is apply the inverse transform, i.e. multiply that vector by the inverse of mvMatrix:
mat4.inverse(mvMatrix) * vec4(0,0,-10,1);
If you wonder where the last element, 1, comes from, and why a 4-element vector is used for a 3-dimensional coordinate (which is something you should really wonder about), have a read about homogeneous coordinates.
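Assuming gl-matrix-style post-multiplication as in the snippet above (mvMatrix = rotateY(-angle) * translate(-currentX, 0, -currentZ)), the inverse applied to (0, 0, -10, 1) collapses to a closed form. A sketch with the algebra folded in (Python; names are illustrative):

```python
import math

def point_in_front(angle_deg, current_x, current_z, distance=10.0):
    """Model-space position of the point `distance` units in front of the
    viewer, i.e. inverse(mvMatrix) * (0, 0, -distance, 1) where
    mvMatrix = rotateY(-angle) * translate(-current_x, 0, -current_z).
    inverse(mvMatrix) = translate(current_x, 0, current_z) * rotateY(angle),
    and rotateY(angle) maps (0, 0, -d) to (-d*sin(angle), 0, -d*cos(angle))."""
    a = math.radians(angle_deg)
    return (current_x - distance * math.sin(a),
            0.0,
            current_z - distance * math.cos(a))

# Facing down -Z (angle = 0): the point is 10 units straight ahead
print(point_in_front(0.0, 3.0, 5.0))   # → (3.0, 0.0, -5.0)
```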
I was sending vertex arrays (of 32 bit floats) to the GPU every time I wanted to draw them, but this wasn't efficient, so I switched to Vertex Buffer Objects to cache my vertex arrays in the GPU.
It's working, but I was wondering if there's a way to determine the size of a given VBO later on without going back to the original vertex arrays? Here's the process I'm struggling with:
I have a vertex array of, for example, six 32 bit floats.
I send my vertex array to the GPU via OpenGL-ES where it's stored in a VBO - to which I retain a handle.
My vertex array is redundant at this point so I delete it.
Later on I use the handle to make OpenGL-ES draw something, but at that point I'd also like to determine the size of the vertex array that was originally used to create the VBO. I now have just the VBO handle - can I somehow determine that I stored six 32-bit floats in this VBO?
I'm probably missing something really obvious.
Thanks for any suggestions!
Doh! Just found it:
int nBufferSize = 0;
// with the VBO bound to GL_ARRAY_BUFFER:
glGetBufferParameteriv(GL_ARRAY_BUFFER, GL_BUFFER_SIZE, &nBufferSize);
int originalVertexArraySize = nBufferSize / sizeof(float);