I have the following task: there is a mesh that contains several frames, and at any given moment only one frame is displayed.
How can I render only part of the mesh?
For example, Direct3D has the following method:
HRESULT IDirect3DDevice9::DrawPrimitive(
D3DPRIMITIVETYPE PrimitiveType,
UINT StartVertex,
UINT PrimitiveCount
);
With it I can specify the offset of the starting vertex and the number of primitives to draw.
Can I do something like this in Unity?
Thanks!
Coded using:
Three.js v0.130.1
Framework: Angular 12, but that's not relevant to the issue.
Testing in the Chrome browser.
I am building an application that receives more than 100K points. I use these points to render a THREE.Points object on the screen.
I found that the default THREE.PointsMaterial does not support lighting (the points look the same with or without lights in the scene).
So I tried to implement a custom ShaderMaterial. But I could not find a way to add lighting to the rendered object.
Here is a sample of what my code is doing:
Sample App on StackBlitz showing my current attempt
In this code I am using sample values for the point cloud data, normals and color, but everything else is similar to my actual application. I can see the 3D object, but it needs proper lighting based on the normals.
I need help or guidance to implement the following:
Add lighting to the custom shader material. I have Googled and tried many things, with no success so far.
Use the normals to show lighting effects. (In this sample the normals are fixed to the Y-axis direction, but in the actual application I calculate them with some vector logic.) So computing the normals is already done; I want to use them to produce shading and highlights in the custom shader material.
In this sample the color attribute is set to a fixed red, but in the actual application I apply colors to the color attribute from a texture using a UV range.
Please advise how (or whether) I can get lighting based on normals for a point cloud. Thanks.
Note: I looked at this Stack Overflow question, but it only deals with changing the alpha/transparency of points, not lighting.
Adding lighting to a custom material is a very complex process, especially since you could use Phong, Lambert, or Physical lighting methods, and there are a lot of calculations that need to be passed from the vertex to the fragment shader. For instance, this segment of shader code is just a small part of what you'd need.
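To give a sense of what even a minimal from-scratch version involves, here is a reduced sketch (not the approach recommended below): a single hand-rolled directional light with Lambert shading, using the geometry's normal and color attributes. The uniform names and light values are illustrative, not taken from the question's code.

// Minimal Lambert-style lighting for THREE.Points with a custom ShaderMaterial.
// Assumes the BufferGeometry has 'position', 'normal' and 'color' attributes.
const material = new THREE.ShaderMaterial({
  vertexColors: true, // makes the built-in 'color' attribute available
  uniforms: {
    uLightDir: { value: new THREE.Vector3(0, 1, 0).normalize() }, // world space
    uPointSize: { value: 4.0 },
  },
  vertexShader: `
    uniform float uPointSize;
    varying vec3 vNormal;
    varying vec3 vColor;
    void main() {
      vNormal = normalize(normalMatrix * normal); // normal in view space
      vColor = color;
      gl_PointSize = uPointSize;
      gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
    }
  `,
  fragmentShader: `
    uniform vec3 uLightDir;
    varying vec3 vNormal;
    varying vec3 vColor;
    void main() {
      // bring the world-space light direction into view space to match vNormal
      vec3 lightDir = normalize((viewMatrix * vec4(uLightDir, 0.0)).xyz);
      float diffuse = max(dot(normalize(vNormal), lightDir), 0.0);
      vec3 ambient = vec3(0.2);
      gl_FragColor = vec4(vColor * (ambient + diffuse), 1.0);
    }
  `,
});

This covers only one light and no specular term; replicating what MeshPhongMaterial or MeshStandardMaterial do (multiple light types, shadows, tone mapping) is where the real complexity lies.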
Instead of trying to re-create lighting from scratch, I recommend you create a PlaneGeometry with the material you'd like (Phong, Lambert, Physical, etc...) and use an InstancedMesh to create thousands of instances, just like in this example.
Based on that example, the pseudo-code of how you could achieve a similar effect is something like this:
const count = 100000;
const geometry = new THREE.PlaneGeometry();
const material = new THREE.MeshPhongMaterial();
const mesh = new THREE.InstancedMesh( geometry, material, count );
mesh.instanceMatrix.setUsage( THREE.DynamicDrawUsage ); // will be updated every frame
scene.add( mesh );
const dummy = new THREE.Object3D();

function update() {
    // Sets the rotation so each plane always faces the camera
    dummy.lookAt( camera.position );

    // Updates the position of each plane (x, y, z stands for each point's position)
    for ( let i = 0; i < count; i ++ ) {
        dummy.position.set( x, y, z );
        dummy.updateMatrix();
        mesh.setMatrixAt( i, dummy.matrix );
    }

    // Tell three.js the instance matrices changed this frame
    mesh.instanceMatrix.needsUpdate = true;
}
The for() loop would be the most expensive part of each frame, so if you need to update it on each frame, you might want to calculate this in the vertex shader, but that's another question altogether.
I want to make a toon border effect. For it, I'll use the depth values of the neighboring pixels of each pixel to determine whether or not it should be drawn black. How can I access that information inside the fragment shader?
When you render your scene the normal way (vertex shader, then fragment shader, in a single pass), there is no way in the fragment shader to access the depth values of other pixels.
But:
You can render the scene twice and apply a post-processing effect. In the first pass you store the depth values (and other data, such as normals) in a render target (a texture); in the second pass you read those textures back.
Here is such an effect for XNA, which can be quickly ported to GLSL: http://xnameetingpoint.weebly.com/shader7f31.html
Here is a link about render-to-texture: http://learningwebgl.com/blog/?p=1786
Hint: depth values alone will not be enough for border detection; you have to use normals as well. This is covered in the XNA tutorial above.
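As a rough illustration of the second pass, here is a minimal fragment shader sketch, written as a GLSL string the way it would be used from WebGL/JavaScript; the uniform and varying names are invented for this example. It assumes the first pass wrote the scene's depth into a texture and that this pass draws a full-screen quad with UVs:

// Hypothetical second-pass shader: compares each pixel's depth with its
// right and upper neighbors and draws black where the jump is large.
const edgeFragmentShader = `
  precision mediump float;
  uniform sampler2D uDepthTexture; // depth stored by the first pass
  uniform vec2 uTexelSize;         // 1.0 / render target resolution
  varying vec2 vUv;                // UVs of the full-screen quad
  void main() {
    float d  = texture2D(uDepthTexture, vUv).r;
    float dx = texture2D(uDepthTexture, vUv + vec2(uTexelSize.x, 0.0)).r;
    float dy = texture2D(uDepthTexture, vUv + vec2(0.0, uTexelSize.y)).r;
    // a large depth difference to a neighbor is treated as a border
    float edge = step(0.01, abs(d - dx) + abs(d - dy));
    gl_FragColor = vec4(vec3(1.0 - edge), 1.0); // black border, white elsewhere
  }
`;

A real effect would run the same comparison on a normal texture as well, as the hint above says, and multiply the result over the shaded scene.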
I'm writing a 2D RPG using LWJGL and Java 1.6. So far, I have a 'World' class, which holds an ArrayList of Tile (an interface with basic code for every Tile), and a GrassTile class, which makes use of a spritesheet.
When using immediate mode to draw a grid of 64x64 GrassTiles I get around 100 FPS. I do this by calling the .draw() method of each tile inside the ArrayList, which binds the spritesheet and draws a certain area of it (with glTexCoord2f()). Since I heard it's better to use VBOs, I found a basic tutorial and tried to implement them in the .draw() method.
Now there are two issues. First, I don't know how to bind only a certain area of a texture to a VBO (binding the whole texture would simply be glBindTexture()), so I tried using VBOs with colours only.
That takes me to the second issue: I got only +20 FPS (120 total), which is not really what I expected, so I suppose I'm doing something wrong. Also, I am making a single VBO for each GrassTile while iterating inside the ArrayList. I think that's kind of wrong, because I can simply throw all the tiles inside a single FloatBuffer.
So, how can I draw similar geometry in a better way, and how can I bind only a certain area of a texture to a VBO?
So, how can I draw similar geometry in a better way...
Like @Ian Mallett described: put all your vertex data into a single vertex buffer object. This makes it possible to render your map in one call. If your map gets 1000 times bigger you may want to implement a camera solution that only draws the vertices currently shown on screen, but that is a question that will arise later, if you're planning a significantly bigger map.
...and how can I bind only a certain area of a Texture to a VBO?
You can only bind a whole texture. What you do instead is point your texture coordinates at the area of the texture you want mapped.
Every texture coordinate relates to a specific vertex, and every tile relates to four vertices. Tiles in your game will commonly share the same texture, hence the name 'tile map'. Make use of that: place all your tile textures in one texture sheet and bind that texture sheet.
For every new 'tile' you create, check whether the area is meant to be air, grass or ground, and then point to the part of the texture sheet that corresponds to what you intend.
Let's say your texture is 100x100 pixels and the ground area is the 15x15 region in the lower left corner. Following the logic above gives the example code shown below:
// The vertexData array simply contains information
// about a tile's four vertices (or six
// vertices if you draw using GL_TRIANGLES).
mVertexBuffer.put(0, vertexData[0]);
mVertexBuffer.put(1, vertexData[1]);
mVertexBuffer.put(2, vertexData[2]);
mVertexBuffer.put(3, vertexData[3]);
mVertexBuffer.put(4, vertexData[4]);
mVertexBuffer.put(5, vertexData[5]);
mVertexBuffer.put(6, vertexData[6]);
mVertexBuffer.put(7, vertexData[7]);
mVertexBuffer.put(8, vertexData[8]);
mVertexBuffer.put(9, vertexData[9]);
mVertexBuffer.put(10, vertexData[10]);
mVertexBuffer.put(11, vertexData[11]);
if (tileIsGround) {
    // Ground occupies the 15x15 area in the lower left corner of the
    // 100x100 sheet, i.e. the range [0.0, 0.15] in both s and t.
    mTextureCoordBuffer.put(0, 0.0f);   // corner 0: (0.00, 0.00)
    mTextureCoordBuffer.put(1, 0.0f);
    mTextureCoordBuffer.put(2, 0.15f);  // corner 1: (0.15, 0.00)
    mTextureCoordBuffer.put(3, 0.0f);
    mTextureCoordBuffer.put(4, 0.15f);  // corner 2: (0.15, 0.15)
    mTextureCoordBuffer.put(5, 0.15f);
    mTextureCoordBuffer.put(6, 0.0f);   // corner 3: (0.00, 0.15)
    mTextureCoordBuffer.put(7, 0.15f);
} else { /* Other texture coordinates. */ }
You actually wrote the solution. The only difference is that you should upload the texture coordinate data to the GPU as well.
This is the key:
I am making a single VBO for each GrassTile while iterating inside the ArrayList.
Don't do this. You make a VBO once, and then you update it if necessary. Creating textures, VBOs and shaders is the slowest possible use of OpenGL; no wonder you're getting problematic framerates when you're doing it O(n) times, every frame.
I think that's kind of wrong, because I can['t?] simply throw all the tiles inside a single FloatBuffer.
You only gain performance when you batch draw calls. This means that when you draw your tiles, you should draw all of them at once with one VBO.
//Initialize
Make a single VBO (or two: one for vertex, one for texture
coordinates, whatever--the key point is O(1) VBOs).
Fill your VBO with ALL of your tiles' data.
//Main loop
while (true) {
Draw the VBO with a single draw call,
thus drawing all your tiles all at once.
}
I am trying to write my particle system for OpenGL ES 2.0. Each particle is made up of 4 vertices, forming a little square on which a transparent texture is drawn.
The problem is that each particle has its own properties (color, position, size) that are constant across the 4 vertices of that particle. The only thing that varies per vertex is which corner of the square it is.
If I send the particle properties via uniform variables, I must do:
for (each particle) {   // runs many, many times
    glUniform*(...);    // upload this particle's properties
    glDrawArrays(...);  // draw only 4 vertices
}
This is clearly inefficient, since I only draw 4 vertices per glDrawArrays call.
If I send these properties via attribute variables, I must repeat the same information 4 times, once for each of the particle's vertices, in the attribute buffer:
struct particle buf[n];            // n = 4 * (number of particles)
for (each particle) {
    struct particle p;
    p = ...;                       // update the particle
    buf[i + 0] = buf[i + 1] = buf[i + 2] = buf[i + 3] = p;  // same data 4 times
}
glBufferData(..., buf, ...);
// then draw everything once afterwards...
This is memory-inefficient and seems very ugly to me. So what is the solution to this problem? What is the right way to pass parameters that change only every few vertices to the shader?
Use point sprites. The introduction is very explicit about how to solve your problem.
You can also combine the use of point sprites with another extension, point_size_array.
...
As Christian Rau has commented, point_size_array is no longer useful with the programmable pipeline: set the maximum point size as usual, then discard fragments based on their distance from the point center, derived from the texture coordinates generated by OpenGL. The particle size should be sent via an additional attribute.
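A minimal sketch of the point-sprite idea (the attribute, uniform and variable names are mine, not from the linked introduction): each particle becomes a single vertex, so its per-particle properties are stored exactly once, and the per-corner information the question worries about comes for free from gl_PointCoord. The shaders are GLSL ES 1.00, shared by OpenGL ES 2.0 and WebGL 1.0, and are shown here as JavaScript strings:

// Vertex shader: one vertex per particle, all per-particle data as attributes.
const particleVertexShader = `
  uniform mat4 uMvpMatrix;
  attribute vec3 aCenter;  // particle position
  attribute float aSize;   // particle size in pixels
  attribute vec4 aColor;   // particle color
  varying vec4 vColor;
  void main() {
    vColor = aColor;
    gl_Position = uMvpMatrix * vec4(aCenter, 1.0);
    gl_PointSize = aSize;  // the rasterizer expands the point into a square
  }
`;

// Fragment shader: gl_PointCoord supplies the per-corner texture coordinate.
const particleFragmentShader = `
  precision mediump float;
  uniform sampler2D uSpriteTexture;
  varying vec4 vColor;
  void main() {
    gl_FragColor = vColor * texture2D(uSpriteTexture, gl_PointCoord);
  }
`;

// One buffer entry per particle (not four), and a single draw call for all:
// gl.drawArrays(gl.POINTS, 0, particleCount);

The main practical limitation is that implementations clamp gl_PointSize to a maximum (see ALIASED_POINT_SIZE_RANGE), so very large particles may still need the quad-based approach.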
GL ES doesn't really have a good solution to this. Desktop OpenGL allows for instancing and various other tricks, but ES just doesn't have those.
You can use a Uniform Buffer Object. Note that this feature is only available on D3D10+ hardware.
Send the information via a texture. I'm not sure that texture sampling is supported in opengl-es 2.0 vertex shaders, but if it is, then that would be optimal.
I have an image that is a combination of the RGB and depth data from a Kinect camera.
I'd like to do two things, both in WebGL if possible:
Create 3D model from the depth data.
Project RGB image onto model as texture.
Which WebGL JavaScript engine should I look at? Are there any similar examples, using image data to construct a 3D model?
(First question asked!)
I found that it is easy to do with Photoshop's 3D tools (3D > New Mesh From Grayscale): http://www.flickr.com/photos/forresto/5508400121/
I am not aware of any WebGL framework that solves your problem specifically. I think you could create a grid from your depth data, starting from a rectangular uniform grid and moving each vertex backwards or forwards along the Z-axis depending on the depth value.
Once you have this, you need to generate the texture coordinate array. From the image you posted on Flickr I would infer that there is a one-to-one mapping between the depth image and the texture, so generating the texture coordinate array should be straightforward: you map the corresponding (s, t) coordinate in the texture to each vertex, so every vertex has two coordinates in the texture coordinate array. Then you bind it.
Finally you need to make sure that you are actually using the texture to color your model. This is a two-step process:
First step: pass the texture coordinates as an "attribute vec2" to the vertex shader and copy them to a "varying vec2".
Second step: in the fragment shader, read the "varying vec2" you created in step one and use it to compute gl_FragColor.
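For what it's worth, here is a sketch of the grid idea using three.js (choosing that engine is my assumption, and depthData, depthScale and rgbTexture are placeholders for your own data). A PlaneGeometry provides both the rectangular uniform grid and the one-to-one UV mapping, and the built-in material handles the attribute/varying plumbing from the two steps above:

const cols = 640, rows = 480;  // e.g. the Kinect depth resolution
const geometry = new THREE.PlaneGeometry(4, 3, cols - 1, rows - 1);
const positions = geometry.attributes.position;
for (let i = 0; i < positions.count; i++) {
  // depthData holds one depth value per grid vertex, in row-major order
  positions.setZ(i, depthData[i] * depthScale);
}
positions.needsUpdate = true;

// The RGB image maps one-to-one onto the grid through the generated UVs
const material = new THREE.MeshBasicMaterial({ map: rgbTexture });
scene.add(new THREE.Mesh(geometry, material));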
I hope it helps.