I'm developing a little game in three.js and have stumbled upon a problem I don't understand.
I'm creating a game instance, and after that I load my enemies, which are added to an array of enemies. Before I can do that I load my 3D objects with loader.load('assets/data/', callback) and then create the enemies.
However, there are two problems here. As soon as the callback is called, my renderer shows 2 programs running, and the second problem is that I'm getting a huge number of vertices and faces from only 10 enemy meshes!
Do I have to go back to Blender and somehow lower the vertex/face count, or am I doing something wrong?
Just hint if you need more info!
Regards.
Related
I've designed a 3D model in SketchUp without using any textures. I'm facing an issue with lagging when I move the mouse and rotate the model. When I exported the model in DAE format and imported it into the three.js online editor, mouse movement became very slow; I think it's causing an FPS drop. I can't understand what the problem is with the model I designed. I need your suggestions and ideas on how to resolve this issue. Thanks for your support. I've uploaded an image of the 3D model. Please take a look.
Object Count: 98,349, Vertices: 2,107,656, Triangles: 702,552
Object Count: 98,349
The object count results in an equal number of draw calls. Such a high value will degrade performance regardless of how complex the respective geometries are.
I suggest you redesign the model and merge individual objects as much as possible. Also try to lower the number of vertices and faces.
Keep in mind that three.js does not automatically merge or batch render items, so it's your responsibility to optimize assets for rendering. It's best to do this when designing the model, or in code via methods like BufferGeometryUtils.mergeBufferGeometries() or via instanced rendering.
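To make that concrete, here is a minimal TypeScript sketch of both options. `scene` and `material` are placeholders for your own objects, and note that recent three.js releases renamed the helper to BufferGeometryUtils.mergeGeometries():

```typescript
import * as THREE from 'three';
import * as BufferGeometryUtils from 'three/examples/jsm/utils/BufferGeometryUtils.js';

declare const scene: THREE.Scene;       // your existing scene
declare const material: THREE.Material; // one shared material

// Option 1: bake 100 boxes into one geometry -> one draw call instead of 100.
const geometries: THREE.BufferGeometry[] = [];
for (let i = 0; i < 100; i++) {
  const box = new THREE.BoxGeometry(1, 1, 1);
  box.translate(Math.random() * 50, 0, Math.random() * 50); // bake position into the vertices
  geometries.push(box);
}
const merged = BufferGeometryUtils.mergeBufferGeometries(geometries);
scene.add(new THREE.Mesh(merged, material));

// Option 2: instanced rendering, still one draw call, one transform per copy.
const instanced = new THREE.InstancedMesh(new THREE.BoxGeometry(1, 1, 1), material, 100);
const matrix = new THREE.Matrix4();
for (let i = 0; i < 100; i++) {
  matrix.setPosition(Math.random() * 50, 0, Math.random() * 50);
  instanced.setMatrixAt(i, matrix);
}
instanced.instanceMatrix.needsUpdate = true;
scene.add(instanced);
```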
I'm currently rendering geological data, and have done so successfully with good results. To clarify: I input a matrix of elevations and output a single static mesh. I do this by creating a single plane for each elevation point and then, after creating all of these individual planes, merging them into a single mesh.
I've been running at 60 FPS even on a MacBook Air, but I want to push the limits. Is using a single PlaneGeometry as a heightmap, as described in other terrain examples, more efficient, or is it ultimately the same product at the end of the process?
Sorry for a general question without code examples, but I think this is a specific enough topic to warrant a question.
From my own tests, which I spent a couple of days running, creating individual custom geometries and merging them as they are created is wildly inefficient, not just in the render loop but also during the loading process.
The process I am using now is creating a single BufferGeometry with enough width and height segments to contain the elevation data, as described in this article.
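For reference, a minimal sketch of that single-geometry approach (older three.js versions call the class PlaneBufferGeometry; the function name and signature here are my own):

```typescript
import * as THREE from 'three';

// One PlaneGeometry whose (segments + 1)^2 vertices are displaced by the
// elevation samples. `elevations` is assumed to be row-major, with exactly
// one value per vertex of the plane.
function terrainFromElevations(elevations: Float32Array, segments: number, size: number): THREE.Mesh {
  const geometry = new THREE.PlaneGeometry(size, size, segments, segments);
  geometry.rotateX(-Math.PI / 2); // lay the plane flat so Y becomes "up"
  const position = geometry.attributes.position;
  for (let i = 0; i < position.count; i++) {
    position.setY(i, elevations[i]); // displace each vertex by its elevation
  }
  geometry.computeVertexNormals(); // fix lighting after the displacement
  return new THREE.Mesh(geometry, new THREE.MeshStandardMaterial());
}
```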
As an exercise, I decided to write a SimCity (original) clone in Swift for OSX. I started the project using SpriteKit, originally having each tile as an instance of SKSpriteNode and swapping the texture of each node when that tile changed. This caused terrible performance, so I switched the drawing over to regular Cocoa windows, implementing drawRect to draw NSImages at the correct tile position. This solution worked well until I needed to implement animated tiles which refresh very quickly.
From here, I went back to the first approach, this time using a texture atlas to reduce the number of draws needed; however, swapping the textures of nodes that needed to be animated was still very slow and had a hugely detrimental effect on frame rate.
I'm attempting to display a 44x44 tile map where each tile is 16x16 pixels. I know there must be an efficient (or perhaps more correct) way to do this. This leads to my question:
Is there an efficient way to support 1500+ nodes in SpriteKit which are animated by changing their textures? More importantly, am I taking the wrong approach by using SpriteKit and an SKSpriteNode for each tile in the map (even if I only redraw the dirty ones)? Would another approach (perhaps OpenGL?) be better?
Any help would be greatly appreciated. I'd be happy to provide code samples, but I'm not sure how relevant/helpful they would be for this question.
Edit
Here are some links to relevant drawing code and images to demonstrate the issue:
Screenshot:
When the player clicks on the small map, the center position of the large map changes. An event is fired from the small map to the central engine powering the game, which then forwards it to listeners. The code that gets executed on the large map to change all of the textures can be found here:
https://github.com/chrisbenincasa/Swiftopolis/blob/drawing-performance/Swiftopolis/GameScene.swift#L489
That code uses tileImages, which is a wrapper around a texture atlas that is generated at runtime.
https://github.com/chrisbenincasa/Swiftopolis/blob/drawing-performance/Swiftopolis/TileImages.swift
Please excuse the messiness of the code -- I made an alternate branch for this investigation and haven't cleaned up a lot of residual code that has been hanging around from previous iterations.
I don't know if this will "answer" your question, but it may help.
SpriteKit will likely be able to handle what you need, but you need to look at different optimizations for SpriteKit, and more so for your game logic.
SpriteKit. Creating a .atlas is by far one of the best things you can do and will help keep your draw calls down. Also, as I learned the hard way, keep a pointer to your SKTextures for as long as you need them and only generate the ones you need. For instance, don't create textureWithImageNamed:@"myImage" every time you need a texture for myImage; instead keep reusing one texture and store it in a dictionary. Also, skView.ignoresSiblingOrder = YES; helps a bunch, but you then have to manage your own zPosition on all the sprites.
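The caching pattern itself is framework-agnostic; here is a minimal sketch of the idea in TypeScript (the names are placeholders, not SpriteKit API):

```typescript
// Load each texture once, hand back the same instance on every later request.
class TextureCache<T> {
  private cache = new Map<string, T>();
  constructor(private load: (name: string) => T) {}

  get(name: string): T {
    let texture = this.cache.get(name);
    if (texture === undefined) {
      texture = this.load(name);   // expensive: disk I/O and image decoding
      this.cache.set(name, texture);
    }
    return texture;                // every later call reuses the instance
  }
}

// usage: textures.get("myImage") always returns the same object
declare function loadTextureFromAtlas(name: string): unknown; // placeholder loader
const textures = new TextureCache(loadTextureFromAtlas);
```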
Game logic. Updating every tile every loop is going to be very expensive. You will want to look at a better way to do that, such as keeping smaller arrays or doing logic (model) updates on a background thread.
I currently have a project you can look at if you want, called Old Frank. I have a 75 x 75 map with 32px by 32px tiles that may be stacked 2 tall. I have both Mac and iOS targets, so you could in theory blow up the scene size and see how the performance holds up. I'm not saying there isn't optimization work to be done (it is a work in progress), but I feel it might at least help get you pointed in the right direction.
Hope that helps.
We've successfully implemented a demo project based on the Earth examples in the Helix 3D Toolkit, and we are creating an effect where the world rotates by using a timer and AddRotateForce on the camera controller. The problem we are having is that when more than a couple hundred world points are added to the HelixViewport3D children collection, the rotation effect slows down to the point of not being usable.
Is there a better way to add additional children to the world sphere without affecting performance dramatically?
I have a hierarchical animated model in DirectX which loads and animates based on the following DirectX sample: http://msdn.microsoft.com/en-us/library/ee418677%28VS.85%29.aspx
As good as the sample is, it does not really go into some of the details of animation that I'd like. For example, if I have a mesh which has a running animation and a throwing animation as separate animation sets, how can I get the throwing animation to occur for bones above the hip and the walking animation to occur for bones beneath the hip?
Also, if I wanted to, for example, have the person lean left or right, would I simply have to find the bone for the hip and multiply a rotation matrix by its matrix? In this case I think the matrix is m_amxBoneOffsets?
Composing multiple animations into a single one is usually the job of an animation system, something that is well out of scope of the D3D sample.
Let's look at your 2 examples:
Running and throwing
Well, in this case you could apply the animation for the lower part of the body from the running animation and the animation for the upper part of the body from the throwing animation. And you'd get a very crappy result.
The how is just a matter of knowing which bones are where in the bone palette (something that depends on how they are stored, and in which order, but nothing inherently hard; the definitive reference should be the documentation of the tool generating the animation data).
In practice, you're better off with a blending of the 2 animations. This, in general, is hard, and software packages exist out there that do it for you, e.g. Gamebryo.
Or, an animation of a running guy who throws is different enough from a standing guy who throws that you might be better off having 2 animations.
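The D3D sample doesn't ship a blender, but to make the idea concrete, here is what weighted blending looks like in a system that has one built in. This sketch uses three.js's AnimationMixer in TypeScript purely as an illustration of the technique, with `model`, `runClip`, and `throwClip` assumed to be loaded elsewhere:

```typescript
import * as THREE from 'three';

declare const model: THREE.Object3D;        // the skinned character
declare const runClip: THREE.AnimationClip; // running animation data
declare const throwClip: THREE.AnimationClip;

const mixer = new THREE.AnimationMixer(model);
const run = mixer.clipAction(runClip);
const throwing = mixer.clipAction(throwClip);
run.play();
throwing.play();
run.setEffectiveWeight(0.7);      // 70% running pose...
throwing.setEffectiveWeight(0.3); // ...blended with 30% throwing pose

// advance the blend each frame in the render loop
const clock = new THREE.Clock();
function animate(): void {
  mixer.update(clock.getDelta());
}
```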
Leaning
If you apply a rotation matrix to the root bone, you'll simply rotate your whole character.
Now if you rotate the next bone in the hierarchy (from the spine), you'll get all the bones that depend on it to rotate likewise. It will probably do what you want, but there's a sure way to find out: try it!
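In three.js skeleton terms (again just an illustration of the same hierarchy behaviour, not the D3D sample's API), that experiment looks like this:

```typescript
import * as THREE from 'three';

declare const model: THREE.Object3D; // your loaded, skinned character

// Rotating one bone in the hierarchy rotates everything parented to it.
// The bone name 'Spine' is an assumption; it depends on your rig.
const spine = model.getObjectByName('Spine') as THREE.Bone;
spine.rotation.z = THREE.MathUtils.degToRad(15); // lean the upper body 15 degrees
```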
Well, the thing is, the running animation SHOULD affect the throwing animation slightly. What you need to look into is animation blending.
I'm sure Valve wrote a good paper on how they implemented it in Counter-Strike many years ago. It's not on the Valve site though, so I'm not sure where I got this memory from...