Performance cost of rendering tiles in LibGDX

I am programming a tile-based strategy game, so I have an orthographic view. The rendered screen is 40×20 tiles, and each tile has a 50×50 px texture. I have one layer of tiles (let's say grass) and I place walls on top of it. The game runs smoothly at 60 fps now. But do I need to remove the underlying grass tile when I place a wall over it? The grass is rendered before the wall, so the wall overdraws it, but I am worried about the performance cost. I render with a SpriteBatch, and there is no documentation on how SpriteBatch works internally.
So the question is: is it very bad for performance to render tiles that are not visible anyway?

In my game I have tiled levels of 100-200×50 tiles (one tile = 90×90 px). Each level has 4 layers (including a physics one), and 10% of all tiles are animated.
I don't have any problems rendering such a map, even on devices like the Samsung Galaxy S2. As far as I know, libGDX uses sprite caching to render tiled maps, so even a map with a huge number of tiles should render smoothly.
So I think you should not worry about performance with a tiled map. You can remove invisible tiles, and in theory that is a good thing to do, but in practice it will only affect performance if you have a really huge number of invisible tiles.
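As a rough illustration of that culling idea, here is a minimal libGDX-style sketch; the wallCovers grid and the texture parameters are hypothetical names, not from the post, and the approach assumes the wall textures are fully opaque:

```java
import com.badlogic.gdx.graphics.Texture;
import com.badlogic.gdx.graphics.g2d.SpriteBatch;

// Hypothetical layout: wallCovers[x][y] marks cells whose wall texture is
// fully opaque, so the grass underneath can never be seen.
void renderTiles(SpriteBatch batch, Texture grass, Texture[][] walls,
                 boolean[][] wallCovers, int tileSize) {
    for (int x = 0; x < walls.length; x++) {
        for (int y = 0; y < walls[x].length; y++) {
            if (!wallCovers[x][y]) {                      // skip hidden grass
                batch.draw(grass, x * tileSize, y * tileSize);
            }
            if (walls[x][y] != null) {                    // wall drawn on top
                batch.draw(walls[x][y], x * tileSize, y * tileSize);
            }
        }
    }
}
```

Whether the skipped draw calls are worth the branch is exactly the trade-off discussed above; at 40×20 tiles either version should hold 60 fps.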

Related

Is adding many GameObjects to a scene (not rendered) inefficient for performance? [Unity]

I'm using Unity 4.6 to develop a 2D game. I want to know if having a lot of GameObjects in the scene (out of the camera's sight) has a considerable influence on performance.
For example, is it efficient to make a scrollable list of names (say, 1000 of them)? (Each one is a GameObject and has a text, a button, etc.)
I mask them to a specified area (for example, 10 of them are visible at the same time).
Thanks in advance!
It depends on whether or not the objects have visible components. If they do, the engine will draw them even if they are 'off-camera'. A GameObject by itself has a pretty light load - a tile-based game could have thousands in memory. You'll want to toggle the visibility of sprites if you plan on drawing a large number to the scene off-camera. This is where a SpriteManager comes in: it checks whether each sprite is inside the camera's rectangle and disables the sprites that aren't. There is a semi-official example here that is good, if a little complicated:
http://wiki.unity3d.com/index.php?title=SpriteManager
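For the concept itself (independent of Unity's API), here is a sketch of what such a manager does, in Java with hypothetical types:

```java
import java.util.List;

// Hypothetical sprite type; in Unity you would toggle a Renderer instead.
class ManagedSprite {
    float x, y, w, h;      // axis-aligned bounds in world units
    boolean visible;
}

class SpriteCuller {
    // Disable everything whose rectangle does not overlap the camera's.
    static void cull(List<ManagedSprite> sprites,
                     float camX, float camY, float camW, float camH) {
        for (ManagedSprite s : sprites) {
            s.visible = s.x < camX + camW && s.x + s.w > camX
                     && s.y < camY + camH && s.y + s.h > camY;
        }
    }
}
```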

Do elements drawn outside the clip plane affect OpenGL performance?

I have a question about clip-space transformation. I am reading an online tutorial and it says that everything you draw outside the clip space will be clipped. Do elements outside the clip space affect performance or not? Since they will not be drawn, it seems they shouldn't.
Assuming they do affect performance, in the case of a 2D game like Super Mario I am thinking about not drawing the elements outside the clip space to achieve better performance. Please clarify. Thanks.
OpenGL has only limited knowledge of your scene and clips very late in the pipeline. It can't apply a broad-phase test. Assuming you can, you should.
Suppose you had a model with 30,000 triangles: OpenGL would transform each and every one of those 30,000 triangles before considering clipping. If you know something as simple as the model's bounding sphere, you could see that the whole thing is completely outside the frustum in a single test and save almost 30,000 triangles' worth of effort.
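A sketch of that broad-phase test, assuming you already have the six frustum planes as (nx, ny, nz, d) tuples with normals pointing inward:

```java
// One sphere test can reject all 30,000 triangles at once.
boolean sphereOutsideFrustum(float[][] planes, float cx, float cy, float cz,
                             float radius) {
    for (float[] p : planes) {
        float dist = p[0] * cx + p[1] * cy + p[2] * cz + p[3];
        if (dist < -radius) return true;   // completely behind one plane
    }
    return false;                          // possibly visible; submit it
}
```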
In a 2D game like Mario, what this usually means is using the scroll position to index into the map and generating geometry only for potentially visible tiles and sprites within the visible area.
For the map, that generally just means figuring out the (x, y) of one corner and then generating geometry for the known width and height of the screen, so the vast majority of the geometry is discarded with zero processing.
For the sprites, this is generally why in those sorts of games you often see enemies reset to their starting position if you walk a little way from them and then walk back: they're added to the active list based on a map-location trigger and removed when you walk far enough away. While not active, no mutable storage is afforded to them.
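A sketch of that map indexing, with hypothetical names; the same function works for both axes:

```java
// Convert a scroll offset into the inclusive range of tile indices that
// can appear on screen; everything outside is skipped with no processing.
int[] visibleRange(float scroll, float screenSize, int tileSize, int mapSize) {
    int first = Math.max(0, (int) Math.floor(scroll / tileSize));
    int last  = Math.min(mapSize - 1,
                         (int) Math.floor((scroll + screenSize) / tileSize));
    return new int[] { first, last };
}
```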

Implementing terrains in XNA similar to Battle Zone (1980)

I am developing a 3D game for Windows Phone that includes terrain and volcanoes at an infinite distance, similar to Battle Zone (1980) by Atari Inc. The player can never touch the terrain no matter how far they drive. Currently, to implement this, I am mapping a 2D texture onto the inside wall of a cylinder. The cylinder also moves with the player so that the player can never reach the terrain. I am not sure whether this is a good method, as I am facing problems like distortion of the texture when mapping it onto the wall of the cylinder.
Can you suggest methods to implement a view of terrain in XNA similar to Battle Zone?
Normally, instead of a cylinder, developers use a box (a so-called skybox).
It has fewer polygons and in general less distortion (there can be some at the edges).
To make it look more real, some developers (like Valve) use an off-screen render in a first pass that includes the skybox plus some distant low-detail models and moving cloud sprites or a textured ring with alpha. The two points of view (main camera and off-screen camera) are synchronised; then, without clearing the colour buffer, they render the final scene on top. Thanks to that, far buildings will move a bit and the scene's surroundings will look less flat. To avoid clearing the z-buffer between passes, they simply do the first pass under the floor (literally) of the main pass's scene.
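One common way to keep the skybox centred on the viewer is to reuse the main camera's rotation but drop its translation. A sketch, assuming a row-major float[16] view matrix (in XNA you would instead build the sky view from the camera's rotation alone):

```java
// Copy the main view matrix and zero its translation column so the box
// rotates with the camera but never gets any closer.
float[] skyViewFrom(float[] mainView) {
    float[] sky = mainView.clone();
    sky[3] = 0f; sky[7] = 0f; sky[11] = 0f;   // translation components
    return sky;
}
```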

Is there a way to pre-render a virtual panoramic scene?

I would like to put a photorealistic virtual scene on a tablet so that when the user rotates the tablet, it looks as if the tablet were a window into a virtual world.
Pre-rendered scenes can be photorealistic, while real-time rendering has a "computer-made" look. Given that for one scene the point of view can be rotated but not translated in space, is it possible for a pre-rendered virtual panoramic scene to give an immersive impression?
I doubt that this is easy, since rotating the viewpoint will cause some sort of distortion. This kind of distortion is easy for apps like Starwalk, but difficult for photos. Can anyone point me in a direction?
I know this would be tremendously easy if motion were restricted to only one direction, but I would like the user to have a full 3D experience.
You need to either warp the photographs before applying them as textures to your "sky dome" or use non-uniform texture coordinates. If done right, this will even out most of the distortion, giving a more realistic appearance.
Another alternative is to use more photographs so that you are only actually using the central area of each one.
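As an illustration of the non-uniform texture coordinates mentioned above, here is a sketch that maps a unit view direction onto an equirectangular (latitude/longitude) photo, so the sampling rather than the mesh absorbs the warp; the function name is made up:

```java
// dx, dy, dz form a unit direction; the result is a UV pair in [0, 1].
float[] equirectUV(float dx, float dy, float dz) {
    float u = (float) (0.5 + Math.atan2(dx, dz) / (2.0 * Math.PI));
    float v = (float) (0.5 - Math.asin(dy) / Math.PI);
    return new float[] { u, v };
}
```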
I've found that http://code.google.com/p/panoramagl/ can render cubic, spherical, and cylindrical panoramic images, so the problem reduces to how to render a panorama, which can be solved by stitching. I will still leave this answer open to see if anyone else has better answers.

Programming 3D animations?

What are the different ways to handle animations for 3D games?
Do you somehow program the animations in the modeler, and thus in the file, then read and apply them in the game, or do you write animation functions to animate your still vectors?
Where are some good tutorials for the programming side of this? Google only gave me the modeler's side.
In production environments, animators use specialized tools such as Autodesk's 3DS Max to generate keyframed animations of 3D models. For each animation for each model, the animator constructs a number of poses for the model called the keyframes, which are then exported out into the game's data format.
The game then loads these keyframes, and to animate the model at a particular time, it picks the two nearest keyframes and interpolates between them to give a smooth animation, even with a small number of keyframes.
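A minimal sketch of that interpolation (all names hypothetical). Poses are flattened float arrays here; real engines interpolate per bone, with lerp for positions and slerp for rotations:

```java
// times is sorted ascending; poses[i] is the keyframe pose at times[i].
float[] sample(float[] times, float[][] poses, float t) {
    int i = 1;
    while (i < times.length - 1 && times[i] < t) i++;   // nearest pair
    float alpha = (t - times[i - 1]) / (times[i] - times[i - 1]);
    alpha = Math.max(0f, Math.min(1f, alpha));          // clamp to the pair
    float[] out = new float[poses[0].length];
    for (int k = 0; k < out.length; k++) {
        out[k] = poses[i - 1][k] + alpha * (poses[i][k] - poses[i - 1][k]);
    }
    return out;
}
```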
Animated models are typically constructed with a bone hierarchy. There is a root bone, which controls the model's location and orientation within the game world. All of the other bones are defined relative to some parent bone to create a tree. Each vertex of the model is tied to a particular bone, so the model can be controlled with a much smaller number of parameters: the relative positions, orientations, and scales of each bone.
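The tree structure boils down to one rule: a bone's world transform is its parent's world transform times its own local transform. A sketch with row-major 4×4 matrices:

```java
class Bone {
    Bone parent;                      // null for the root bone
    float[] local = new float[16];    // pose relative to the parent
    float[] world = new float[16];    // pose in game-world space

    void updateWorld() {              // parents must be updated first
        world = (parent == null) ? local.clone() : mul(parent.world, local);
    }

    static float[] mul(float[] a, float[] b) {   // row-major 4x4 multiply
        float[] r = new float[16];
        for (int row = 0; row < 4; row++)
            for (int col = 0; col < 4; col++)
                for (int k = 0; k < 4; k++)
                    r[row * 4 + col] += a[row * 4 + k] * b[k * 4 + col];
        return r;
    }
}
```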
Smooth skinning is a technique used to improve the quality of animations. With smooth skinning, a vertex is not tied to just one bone, but it can be tied to multiple bones (usually a hard limit, such as 4, is set; vertices rarely need more than 3) with corresponding weights. This makes the animator's job harder, since he has to do a lot more work with vertices near joints, but the result is a much smoother animation with less distortion around the joints.
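In code, skinning one vertex is a weighted sum of the bone-transformed positions. A sketch (up to 4 influences per vertex, matching the limit above; the helper is written out so nothing is assumed):

```java
// boneMatrices are row-major 4x4 world transforms; weights sum to 1.
float[] skin(float[] v, float[][] boneMatrices, int[] indices, float[] weights) {
    float[] out = new float[3];
    for (int i = 0; i < indices.length; i++) {
        float[] p = transformPoint(boneMatrices[indices[i]], v);
        out[0] += weights[i] * p[0];
        out[1] += weights[i] * p[1];
        out[2] += weights[i] * p[2];
    }
    return out;
}

static float[] transformPoint(float[] m, float[] v) {  // m * (x, y, z, 1)
    return new float[] {
        m[0] * v[0] + m[1] * v[1] + m[2]  * v[2] + m[3],
        m[4] * v[0] + m[5] * v[1] + m[6]  * v[2] + m[7],
        m[8] * v[0] + m[9] * v[1] + m[10] * v[2] + m[11],
    };
}
```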
Alternatively, some games use procedural animation, whereby the animation data is constructed at runtime. The bone positions and orientations are computed according to some algorithm, such as inverse kinematics or ragdoll physics. Other options are of course available, but they must be coded by programmers.
Instead of procedurally animating the bones and using forward kinematics to determine the locations of all of the model's vertices, another option is to just procedurally generate the position of each vertex on its own. This allows for more complex animations that aren't bound by bone hierarchies, but of course it's much more complicated and much harder to do well.
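The simplest version of per-vertex procedural animation is a closed-form displacement evaluated every frame, e.g. a sine wave (the constants here are arbitrary):

```java
// Displace each vertex vertically from its rest position; keeping the
// base positions separate avoids accumulating error frame to frame.
void waveVertices(float[] baseY, float[] outY, float[] xs, float time) {
    for (int i = 0; i < xs.length; i++) {
        outY[i] = baseY[i] + 0.1f * (float) Math.sin(time * 2.0 + xs[i] * 0.5);
    }
}
```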
Different ways to handle animation in games?
For characters, typically:
Skeletal Deformation
Blend shapes
Sometimes even fully animated meshes (L.A. Noire did that)
In the recent Tomb Raider we use a compute job to simulate thousands of splines for the hair, and the vertices are controlled by these splines with a vertex shader.
Often these are driven by key-frame animations that are interpolated at runtime, and sometimes we add code to drive the influences (bones, blendshape weights, etc.) procedurally.
For some simple objects that move rigidly, sometimes the designers will have animation control over the whole object without having to deform it. Simple things like shrubs can be made to move using something akin to blend shapes, where you have a full mesh pose in a few extremes and you blend between those extremes.
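A minimal blend-shape sketch of that idea, with hypothetical names: each target stores a full vertex pose for one extreme, and the result blends between the base mesh and those extremes:

```java
// base and each targets[t] are flattened vertex arrays of equal length.
float[] blendShapes(float[] base, float[][] targets, float[] weights) {
    float[] out = base.clone();
    for (int t = 0; t < targets.length; t++)
        for (int i = 0; i < out.length; i++)
            out[i] += weights[t] * (targets[t][i] - base[i]);
    return out;
}
```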
As for tutorials... well... Are you interested in writing the math for the animation yourself or are you more interested in getting some animated character in your game and focus on the gameplay?
For just doing the gameplay, Unity is a pretty great platform to play around with.
If you are however interested in understanding the math for animation or games in general, here's what I'd read up on as a primer:
Matrices used for 3D transforms (make sure you understand the associative and distributive laws, and that matrix multiplication is not commutative; also learn what each row and column represents - they are simpler than they seem)
Quaternions for rotations (less intuitive, but closely coupled to an axis and rotation)
And don't leave out the basics (a short sketch of both follows this list):
Vector Dot Product
Vector Cross Product
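For reference, both fit in a few lines:

```java
// Dot product: projection length / angle test (0 means perpendicular).
static float dot(float[] a, float[] b) {
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
}

// Cross product: a vector perpendicular to both inputs; used for
// normals and winding tests.
static float[] cross(float[] a, float[] b) {
    return new float[] {
        a[1] * b[2] - a[2] * b[1],
        a[2] * b[0] - a[0] * b[2],
        a[0] * b[1] - a[1] * b[0],
    };
}
```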
Also, about 10 years ago I wrote a simple animation library; it's free of most of the more advanced concepts, so it's not a bad place to look if you want to get a basic idea of how it works: http://animadead.sf.net
Good Luck!
