How to reduce draw calls for animated GameObjects?

I have a scenario where I have a wall, and whenever the player hits the wall it falls into pieces. Each piece is a child GameObject of that wall, and each piece uses the same material. I tried Dynamic Batching on them, but it didn't reduce the draw calls.
Then I tried CombineChildren.cs; it combined all the meshes and reduced the draw calls, but when my player hit the wall, the animation no longer played.
I cannot use SkinnedMeshCombiner.cs from this wiki together with the answer from this link, because my game objects have Mesh Renderers rather than Skinned Mesh Renderers.
Is there any other solution I can try?

I tried Dynamic Batching on them, but it didn't reduce the draw calls.
Dynamic batching is almost the only way to go if you want to reduce draw calls for moving objects (unless you implement something similar on your own).
However, it has some limitations, as explained in the docs.
In particular:
limits on vertex numbers for objects to be batched
all objects must share the same material instance (not different instances of the same material; be sure to check this)
no lightmapped objects
no multipass shaders
Be sure all limitations listed in the doc are met, then dynamic batching should work.
Then I tried CombineChildren.cs; it combined all the meshes and reduced the draw calls, but when my player hit the wall, the animation no longer played.
That's because, after being combined, all the GameObjects are merged into a single one. You can no longer move them independently.

Related

Optimizing terrain rendering in Three.js

I'm currently rendering geological data, and have done so successfully with good results. To clarify, I input a matrix of elevations and output a single static mesh. I do this by creating a single plane for each elevation point and then, after creating all of these individual planes, merging them into a single mesh.
I've been running at 60 FPS even on a Macbook Air, but I want to push the limits. Is using a single PlaneGeometry as a heightmap as described in other terrain examples more efficient, or is it ultimately the same product at the end of the process?
Sorry for a general question without code examples, but I think this is a specific enough topic to warrant a question.
From my own tests, which I spent a couple of days running, creating individual custom geometries and merging them as they are created is wildly inefficient, not just in the render loop but also during the loading process.
The process I am using now is creating a single BufferGeometry with enough width and height segments to contain the elevation data, as described in this article.
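A minimal sketch of that single-geometry approach (assuming the elevation data is a flat, row-major array; the function name and parameters are illustrative, not from the article): build one shared set of position and index arrays for the whole grid, which can then be handed to a single BufferGeometry instead of merging thousands of individual planes.

```javascript
// Build the raw arrays for one grid mesh from a heightmap.
// elevations: flat row-major array of heights, length cols * rows.
function buildHeightmapArrays(elevations, cols, rows, cellSize) {
  const positions = new Float32Array(cols * rows * 3);
  for (let r = 0; r < rows; r++) {
    for (let c = 0; c < cols; c++) {
      const i = (r * cols + c) * 3;
      positions[i] = c * cellSize;                 // x
      positions[i + 1] = elevations[r * cols + c]; // y (height)
      positions[i + 2] = r * cellSize;             // z
    }
  }
  // Two triangles per grid cell, indexing into the shared vertex list,
  // so every interior vertex is stored once instead of once per plane.
  const indices = [];
  for (let r = 0; r < rows - 1; r++) {
    for (let c = 0; c < cols - 1; c++) {
      const a = r * cols + c;
      const b = a + 1;
      const d = a + cols;
      const e = d + 1;
      indices.push(a, d, b, b, d, e);
    }
  }
  return { positions, indices };
}
```

These arrays map directly onto a position attribute and an index buffer, so the whole terrain renders as one draw call.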

Efficiently rendering tiled map using SpriteKit

As an exercise, I decided to write a SimCity (original) clone in Swift for OSX. I started the project using SpriteKit, originally having each tile as an instance of SKSpriteNode and swapping the texture of each node when that tile changed. This caused terrible performance, so I switched the drawing over to regular Cocoa windows, implementing drawRect to draw NSImages at the correct tile position. This solution worked well until I needed to implement animated tiles which refresh very quickly.
From here, I went back to the first approach, this time using a texture atlas to reduce the amount of draws needed, however, swapping textures of nodes that need to be animated was still very slow and had a huge detrimental effect on frame rate.
I'm attempting to display a 44x44 tile map where each tile is 16x16 pixels. I know there must be a more efficient (or perhaps more correct) way to do this. This leads to my question:
Is there an efficient way to support 1500+ nodes in SpriteKit and which are animated through changing their textures? More importantly, am I taking the wrong approach by using SpriteKit and SKSpriteNode for each tile in the map (even if I only redraw the dirty ones)? Would another approach (perhaps, OpenGL?) be better?
Any help would be greatly appreciated. I'd be happy to provide code samples, but I'm not sure how relevant/helpful they would be for this question.
Edit
Here are some links to relevant drawing code and images to demonstrate the issue:
Screenshot:
When the player clicks on the small map, the center position of the large map changes. An event is fired from the small map to the central engine powering the game, which is then forwarded to listeners. The code that gets executed on the large map to change all of the textures can be found here:
https://github.com/chrisbenincasa/Swiftopolis/blob/drawing-performance/Swiftopolis/GameScene.swift#L489
That code uses tileImages, which is a wrapper around a texture atlas that is generated at runtime.
https://github.com/chrisbenincasa/Swiftopolis/blob/drawing-performance/Swiftopolis/TileImages.swift
Please excuse the messiness of the code; I made an alternate branch for this investigation and haven't cleaned up a lot of residual code left over from previous iterations.
I don't know if this will "answer" your question, but may help.
SpriteKit will likely be able to handle what you need but you need to look at different optimizations for SpriteKit and more so your game logic.
SpriteKit. Creating a .atlas is by far one of the best things you can do and will help keep your draw calls down. Also, as I learned the hard way, keep a pointer to your SKTextures for as long as you need them, and only generate the ones you need. For instance, don't call textureWithImageNamed:@"myImage" every time you need a texture for myImage; instead, keep reusing one texture and store it in a dictionary. Also, skView.ignoresSiblingOrder = YES; helps a bunch, but you then have to manage the zPosition of all the sprites yourself.
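The "load once, store in a dictionary, reuse" pattern described above is the same in any engine. A generic sketch in JavaScript (the original advice is about SKTexture in SpriteKit; loadTexture here is a stand-in for an expensive loader like textureWithImageNamed:):

```javascript
// Wrap an expensive loader in a cache so each texture is created once
// and the same object is handed back on every subsequent request.
function makeTextureCache(loadTexture) {
  const cache = new Map();
  return function getTexture(name) {
    if (!cache.has(name)) {
      cache.set(name, loadTexture(name)); // load only on first request
    }
    return cache.get(name);               // reuse thereafter
  };
}
```

Calling getTexture("myImage") twice returns the same object, so animated tiles can swap between already-loaded textures instead of re-creating them each frame.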
Game logic. Updating every tile on every loop is going to be very expensive. You will want to look at a better way to do that, such as keeping smaller arrays or doing logic (model) updates on a background thread.
I currently have a project you can look into if you want, called Old Frank. I have a map that is 75 x 75 with 32px by 32px tiles that may be stacked two tall. I have both Mac and iOS targets, so you could in theory blow up the scene size and see how the performance holds up. I'm not saying there isn't optimization work to be done (it is a work in progress), but I feel it might at least help get you pointed in the right direction.
Hope that helps.

What is the best approach for making large number of 2d rectangles using Three.js

Three.JS noob here trying to do 2d visualization.
I used d3.js to make an interactive visualization involving thousands of nodes (rectangle shaped). Needless to say, there were performance issues during animation, because browsers have to create an SVG DOM element for every one of those 10 thousand nodes.
I wish to recreate the same visualization using WebGL in order to leverage hardware acceleration.
Now, Three.js is the library I have chosen because of its popularity (by the way, I did look at PixiJS and its API didn't appeal to me). I want to know the best approach to doing 2D graphics in three.js.
I tried creating one PlaneGeometry for every rectangle. But it seems that 10 thousand plane geometries are not the way to go (animation becomes super slow).
I am probably missing something. I just need to know the best primitive way to create 2D rectangles while still being able to identify them uniquely, so that I can interact with them once drawn.
Thanks for any help.
EDIT: Would you guys suggest to use another library by any chance?
I think you're on the right track with looking at WebGL, but depending on what you're doing in your visualization you might need to get closer to the metal than "out of the box" threejs.
I recommend taking a look at GLSL and taking a look at how you can implement your visualization using vertex and fragment shaders. You can still use threejs for a lot of the WebGL plumbing.
The reason you'll probably need to get directly into GLSL shader work is that you want to take as much of the poly manipulation logic out of JavaScript as possible. Any time you ask JS to do a tight loop over tens of thousands of polys to update positions, etc., you are going to struggle with CPU usage.
It is going to be much more performant to have JS pass data parameters into your shaders and let the vertex manipulation happen there.
Take a look here: http://www.html5rocks.com/en/tutorials/webgl/shaders/ for a nice shader tutorial.
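The JS side of that division of labor is mostly one-time packing: flatten the per-rectangle data into a typed array once, upload it as an attribute, and let the vertex shader animate from a single uniform (e.g. time) each frame. A sketch of the packing step, with illustrative names not taken from the question:

```javascript
// Pack one (x, y) center per rectangle into a flat Float32Array.
// This runs once; per-frame motion then happens in the vertex shader,
// so no JS loop touches the 10k rectangles during animation.
function packRectOffsets(rects) {
  const data = new Float32Array(rects.length * 2);
  rects.forEach((r, i) => {
    data[i * 2] = r.x;
    data[i * 2 + 1] = r.y;
  });
  return data;
}
```

The resulting array is what you would wrap in a buffer attribute; the shader reads it per vertex (or per instance) while JS only updates a handful of uniforms per frame.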

What are the non-trivial use-cases of attributes in WebGL/OpenGL in general?

I'm making a WebGL game and eventually came up with a pretty convenient concept of object templates, where game objects of the same kind (say, characters of the same race) use the same template (which means: buffers, attributes and shader program) and are instanced from that template by specifying a set of uniforms (which are, in fact, the most common difference between same-kind objects: model matrix, textures, bone positions, etc.). For making independent objects with their own deep copy of the buffers, I just deep-copy and re-initialize the original template and start instantiating new objects from it.
But after that I started having doubts. Say, if I start using morphing on objects, by explicitly editing the vertices, this approach will require me to make a separate template for every object of that kind (otherwise, they would all morph in exactly the same phase). Which is probably fine in this very case, 'cause I'll most likely need to recalculate normals and even texture coordinates, which means most of the buffers.
But what if I'm missing some very common case of using attributes, say, blood decals, which will require me to update only a small piece of the buffer? In that case, it would be much more reasonable to have two buffers for each object: a common one that is shared by them all and the one for blood decals, which is unique for every single of them. And, as blood is usually spilled on everything, this sounds pretty reasonable, so that we would save a lot of space by storing vertices, normals and such without their unnecessary duplication.
I haven't tried implementing decals yet, so honestly not even sure if implementing them using vertex painting (textured or not) is the right choice. But I'm also pretty sure there are some commonly used attributes aside from vertices, normals and texture coordinates.
Here are some that I managed to come up with myself:
decals (probably better to be modelled as separate objects?)
bullet holes and such (same as decals maybe?)
Any thoughts?
UPD: as all this might sound confusing, I want to clarify: I do understand that using as few buffers as possible is a good thing, this is exactly why I'm trying to use this templates concept. My question is: what are the possible cases when using a single buffer and a single element buffer (with both of them shared between similar objects) for a template is going to stab me in the back?
Keeping a giant chunk of data that won't change on the card is incredibly useful for saving bandwidth. Additionally, you probably won't be directly changing the vertex positions once they are on the card. Instead, you will probably morph them with passed-in uniforms in the vertex shader through skeletal animation. Read about it here: Skeletal Animation
Do keep in mind, though, that in keyframe animation with meshes you keep a bunch of buffers on the card, each holding a different keyframe pose of the animation. You then load whichever two keyframes you want to interpolate over as attributes and blend between them (you can have more than two). Keyframe Animation
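The blend itself is a per-vertex linear interpolation between the two bound pose buffers. Shown here in JavaScript for illustration; in practice this exact computation lives in the vertex shader, with the two poses as attributes and t as a uniform:

```javascript
// Blend two keyframe poses (flat arrays of vertex components)
// with weight t in [0, 1]: t = 0 gives poseA, t = 1 gives poseB.
function blendPoses(poseA, poseB, t) {
  const out = new Float32Array(poseA.length);
  for (let i = 0; i < poseA.length; i++) {
    out[i] = poseA[i] * (1 - t) + poseB[i] * t; // linear interpolation
  }
  return out;
}
```

The CPU's only per-frame work is choosing which two poses to bind and advancing t, which is why this scales to large meshes.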
Additionally, with the introduction of transform feedback (no, you don't get to use it in WebGL; it became core in OpenGL 3.0, while WebGL is based on OpenGL ES 2.0, which is based on OpenGL 2.0), you can start keeping calculated data GPU-side. In other words, you can run a giant particle-system simulation in the vertex or geometry shader, store the calculated data into another buffer, and then use that buffer in the next frame without a round trip from the GPU to the CPU. Read about it here: Transform Feedback, and here: Transform Feedback how-to.
In general, you don't want to touch buffers once they are on the card, especially every frame. Instead, load several and reference that data in your shaders as attributes.

Looking to optimize the rendering of lines in webgl (using the three.js library)

Recently started learning webGL and decided to use the Three.js library.
Currently, in addition to rendering over 100K cubes, I'm also trying to render lines between those cubes (over 100K of those as well).
The problem occurs when I try to draw the lines, NOT the cubes. Rendering 100K cubes was relatively fast. Even rendering those 100K+ lines is relatively fast, but when I try to zoom/pan using the TrackballControls, the FPS drops to almost 0.
I've searched StackOverflow and various other sites in order to improve the performance of my application. I've used the merging-geometries technique, delayed rendering of the lines (basically x cubes/lines at a time using a timeout in JS), and adjusted the appearance of the lines to require the most minimal rendering time.
Are there any other methods in constructing the lines so that the rendering/fps aren't affected? I'm building a group of lines and THEN adding it to the scene. But is there a way that merging is possible with lines? Should I be constructing my lines using different objects?
I just find it strange that I can easily render/pan/zoom/maintain a high fps with over 100k cubes but with lines (which is the most basic form of geometry besides a point), everything crashes.
You have found out (the hard way) where graphics chip vendors put their focus in their device drivers. But that is to be expected in 3D graphics, as lines (yes, the most basic geometry) are not used by many games, so they do not receive as much attention as polygons do. You can take a look at the example webgl_buffergeometry_lines.html, which is probably the fastest way to draw lines.
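The core of that buffergeometry approach, in sketch form: instead of one Line object per connection, pack every segment's endpoints into a single flat array and draw them all in one call (e.g. as a single THREE.LineSegments; names here are illustrative):

```javascript
// Pack many line segments into one flat position array.
// segments: [{ from: [x, y, z], to: [x, y, z] }, ...]
// Each segment contributes two vertices (6 floats) back to back.
function packLineSegments(segments) {
  const positions = new Float32Array(segments.length * 6);
  segments.forEach((s, i) => {
    positions.set(s.from, i * 6);     // segment start
    positions.set(s.to, i * 6 + 3);   // segment end
  });
  return positions;
}
```

The resulting array becomes the position attribute of one BufferGeometry, so 100K+ segments cost one draw call instead of 100K scene-graph objects, which is what kills the frame rate during pan/zoom.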
