In a 2D scene in Godot, I have a factory belt transporting items. The items are physics objects (with linear damping set to 1 so they do not accelerate too much), and the belts have an Area2D with a gravity force that transports the items. The belts have a sprite animation showing the belt moving.
I can sync the sprite animation reasonably well to the speed the items reach on the belts by adjusting the AnimatedSprite's speed scale. Any divergence looks odd visually of course, as the items will either be moving faster or slower than the belt.
I am worried that differences in FPS across devices may break this visual sync.
Should I just stop worrying? Is there a better way to implement belts moving items? I am aware that I can implement the item movement myself instead of relying on the physics engine - but this won't solve the problem of syncing with the belt animation.
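For reference, here's a minimal sketch of the sync relationship I'm relying on (sketched in plain C++ for illustration; beltSpeed, framePixels, and baseFps are made-up names, not Godot properties):

```cpp
#include <cstdio>

// Sketch: pick the AnimatedSprite speed scale so the belt texture and the
// items move at the same pixels per second. Assumes the belt art shifts
// `framePixels` pixels per animation frame and plays at `baseFps` frames
// per second when the speed scale is 1.
double AnimationSpeedScale(double beltSpeed,   // target item speed, px/s
                           double framePixels, // px the texture shifts per frame
                           double baseFps)     // animation FPS at speed scale 1
{
    // At speed scale s the texture moves s * framePixels * baseFps px/s;
    // solve for the s that matches beltSpeed.
    return beltSpeed / (framePixels * baseFps);
}

int main()
{
    // E.g. items settle at 120 px/s, each frame shifts the belt 8 px, 10 FPS.
    std::printf("speed scale = %f\n", AnimationSpeedScale(120.0, 8.0, 10.0));
}
```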
Related
Is there a way to universally multiply physics2D calculations on the canvas?
I'm trying to make a set of canvas UI elements with 2D physics properties. The objects contain images and text, but they need to respond to gravity, impacts, and overlapping collision boxes with other GUI elements.
I've added Rigidbody2D and BoxCollider2D components to my objects. However, they move very slowly. If given gravity, they fall slowly. If overlapped, they push each other apart slowly.
I've figured out that this is due to the canvas having a very large scale. My objects are effectively 'very big and very far away'.
I can't modify the canvas scale. It needs to be huge or I get render artifacts.
I can't just modify gravity because it doesn't provide a universal fix. Things fall faster, but they don't push apart or spring right.
I can't modify the timestep because it affects the whole world, not just the canvas.
My canvas objects have widths akin to 80, where Unity physics expects widths akin to 1. How can I get them to behave like they have a width of 1?
Is there some universal scaling factor for canvas based physics, or am I simply mis-using the canvas for something it is not intended for?
If you are still having this problem, the answer to fixing the movement rates of your scaled-up objects is to scale up your movement forces, as well as your gravity, by the same factor as the size increase. If you can't get certain elements to work right, apply a push force that you can set to any strength.
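A minimal sketch of that idea (plain C++, engine-agnostic; kCanvasScale is an illustrative constant, not a Unity API):

```cpp
// Sketch: if canvas objects are ~80 units wide where the physics engine
// expects ~1, multiply gravity and every force you apply by that same
// ratio so the scaled-up objects accelerate at a believable rate.
const float kCanvasScale = 80.0f; // canvas size / size physics expects

struct Vec2 { float x, y; };

// Scale a force (or gravity vector) before handing it to the physics engine.
Vec2 ScaleForCanvas(Vec2 force)
{
    return { force.x * kCanvasScale, force.y * kCanvasScale };
}
```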
I'm using Unity 4.6 to develop a 2D game. I want to know if having a lot of GameObjects in the scene (out of the camera's sight) has a considerable influence on performance.
For example, is it efficient to make a scrollable list of names (like 1000 of them)? (Each one is a GameObject with a text, a button, etc.)
I mask them in a specified area (for example 10 of them are visible at the same time).
Thanks in advance!
Depends on whether or not the objects have visible components. If they do, the engine will draw them even if they are 'off-camera'. A game object by itself has a pretty light load - a tile-based game could have thousands in memory. You'll want to toggle the visibility of sprites if you plan on drawing a large number to the scene off-camera. This is where a SpriteManager comes in. It'll check to see whether each sprite is in the camera's rectangle and disable the sprites that aren't. There is a semi-official example here that is good, if a little complicated:
http://wiki.unity3d.com/index.php?title=SpriteManager
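A minimal sketch of what such a manager does each frame (plain C++, engine-agnostic; Sprite and Rect are illustrative types, not the SpriteManager API):

```cpp
#include <vector>

struct Rect { float x, y, w, h; };

struct Sprite {
    Rect bounds;          // sprite's axis-aligned bounds in world space
    bool rendererEnabled; // whether the engine should draw it
};

// True if two axis-aligned rectangles overlap.
bool Overlaps(const Rect& a, const Rect& b)
{
    return a.x < b.x + b.w && b.x < a.x + a.w &&
           a.y < b.y + b.h && b.y < a.y + a.h;
}

// Enable only sprites that intersect the camera rectangle, so everything
// off-camera costs no draw calls.
void CullSprites(std::vector<Sprite>& sprites, const Rect& camera)
{
    for (Sprite& s : sprites)
        s.rendererEnabled = Overlaps(s.bounds, camera);
}
```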
I am programming a tile-based strategy game, so I have an orthographic view. The rendered screen is 40*20 tiles, and each tile has a 50*50 px texture. I have one layer of tiles (let's say grass) and I place walls on top of it. The game runs smoothly at 60 fps now. But do I need to remove the underlying grass tile when I place a wall over it? The grass is rendered before the wall, so the wall overdraws it, but I am worried about the performance cost. I render with SpriteBatch, and there is no documentation on how SpriteBatch works internally.
So the question is: Is it highly performance unfriendly to render tiles which are not visible anyway?
In my game I have tiled levels of around 100-200 x 50 tiles (one tile = 90x90 px). Each level has 4 layers (including a physics one), and 10% of all tiles are animated.
I don't have any problems rendering such a map, even on devices like the Samsung Galaxy S2. As far as I know, libgdx uses sprite caching to render tiled maps, so even a map with a huge number of tiles should work smoothly.
So I think you should not worry about performance with a tiled map. You can remove invisible tiles, and in theory it is a good thing to do, but in practice it will impact performance only if you have a really huge number of invisible tiles.
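If you do decide to cull covered tiles, a minimal sketch of the idea (plain C++, engine-agnostic; the draw functions are illustrative stand-ins for SpriteBatch calls):

```cpp
#include <cstdint>
#include <cstdio>

constexpr int kWidth = 40, kHeight = 20;

struct Tile {
    std::uint16_t grassId;
    std::uint16_t wallId; // 0 = no wall on this tile
};

// Illustrative stand-ins for the real SpriteBatch draw calls.
void DrawGrass(int x, int y, std::uint16_t id) { std::printf("grass %d at %d,%d\n", id, x, y); }
void DrawWall(int x, int y, std::uint16_t id)  { std::printf("wall %d at %d,%d\n", id, x, y); }

// Sketch: skip submitting grass quads that an opaque wall fully covers.
// Only valid when wall textures have no transparent pixels.
void DrawLayers(const Tile (&tiles)[kHeight][kWidth])
{
    for (int y = 0; y < kHeight; ++y)
        for (int x = 0; x < kWidth; ++x) {
            const Tile& t = tiles[y][x];
            if (t.wallId == 0)
                DrawGrass(x, y, t.grassId); // no wall, so grass is visible
            else
                DrawWall(x, y, t.wallId);   // wall hides the grass entirely
        }
}
```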
I have an animated mesh in the .x format that I've loaded with D3DXLoadMeshHierarchyFromX, and I have an animation controller for it. The mesh has two animations, one for walking and one for throwing.
Is it at all possible to blend the two animations so that both run together, with the walk taking priority for frames below the hip while the throwing animation takes priority for frames above it? If it is, will the effect look convincing and therefore be worth pursuing? Do game developers typically blend animations this way to get all the different animations they need, or do they simply create multiple versions of the same animation, i.e. walking while throwing, standing while throwing, walking without throwing?
You can set high- and low-priority animation tracks with ID3DXAnimationController::SetTrackPriority. You can then blend between them using ID3DXAnimationController::SetPriorityBlend.
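A minimal sketch of that setup (assuming the .x file contains animation sets named "Walk" and "Throw"; those names and the track numbers are illustrative, and error handling is mostly omitted):

```cpp
#include <d3dx9anim.h> // legacy DirectX SDK animation API

// Put the walk on a low-priority track and the throw on a high-priority
// track, then blend the two groups together.
HRESULT SetupPriorityBlend(ID3DXAnimationController* anim)
{
    LPD3DXANIMATIONSET walk = NULL, throwAnim = NULL;
    anim->GetAnimationSetByName("Walk", &walk);
    anim->GetAnimationSetByName("Throw", &throwAnim);
    if (!walk || !throwAnim)
        return E_FAIL;

    anim->SetTrackAnimationSet(0, walk);      // track 0: walk
    anim->SetTrackAnimationSet(1, throwAnim); // track 1: throw
    anim->SetTrackPriority(0, D3DXPRIORITY_LOW);
    anim->SetTrackPriority(1, D3DXPRIORITY_HIGH);
    anim->SetTrackEnable(0, TRUE);
    anim->SetTrackEnable(1, TRUE);

    // Weight in [0,1] blending the low- and high-priority track groups.
    anim->SetPriorityBlend(0.5f);

    walk->Release();      // the controller holds its own references
    throwAnim->Release();
    return S_OK;
}
```

Bear in mind that, as far as I know, the priority blend applies across the whole skeleton, so an exact below-the-hip/above-the-hip split would still need per-bone handling on top of this.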
What are the different ways to handle animations for 3D games?
Do you somehow author the animations in the modeler, and thus in the file, then read and play them back in the game, or do you create animation functions to animate your otherwise static meshes?
Where are some good tutorials for the programming side of this? Google only gave me the modeler's side.
In production environments, animators use specialized tools such as Autodesk's 3DS Max to generate keyframed animations of 3D models. For each animation for each model, the animator constructs a number of poses for the model called the keyframes, which are then exported out into the game's data format.
The game then loads these keyframes, and to animate the model at a particular time, it picks the two nearest keyframes and interpolates between them to give a smooth animation, even with a small number of keyframes.
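A minimal sketch of that sampling step (plain C++, positions only; a full implementation would also interpolate rotations, ideally with quaternion slerp):

```cpp
struct Keyframe {
    float time;   // seconds into the animation
    float pos[3]; // bone-local translation at this keyframe
};

// Sample the track at time t: find the two keyframes that bracket t and
// linearly interpolate between them. Assumes count >= 2 and keys sorted.
void SamplePosition(const Keyframe* keys, int count, float t, float out[3])
{
    int i = 1;
    while (i < count - 1 && keys[i].time < t)
        ++i;

    const Keyframe& a = keys[i - 1];
    const Keyframe& b = keys[i];
    float u = (t - a.time) / (b.time - a.time); // 0..1 between a and b
    if (u < 0.0f) u = 0.0f;                     // clamp outside the range
    if (u > 1.0f) u = 1.0f;
    for (int k = 0; k < 3; ++k)
        out[k] = a.pos[k] + u * (b.pos[k] - a.pos[k]);
}
```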
Animated models are typically constructed with a bone hierarchy. There is a root bone, which controls the model's location and orientation within the game world. All of the other bones are defined relative to some parent bone to create a tree. Each vertex of the model is tied to a particular bone, so the model can be controlled with a much smaller number of parameters: the relative positions, orientations, and scales of each bone.
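A minimal sketch of evaluating such a hierarchy (plain C++; the 4x4 matrices stand in for whatever transform representation the engine uses):

```cpp
#include <cstddef>
#include <vector>

struct Mat4 { float m[16]; }; // row-major 4x4 transform

// Standard 4x4 matrix product: apply b, then a.
Mat4 Mul(const Mat4& a, const Mat4& b)
{
    Mat4 r{};
    for (int row = 0; row < 4; ++row)
        for (int col = 0; col < 4; ++col)
            for (int k = 0; k < 4; ++k)
                r.m[row * 4 + col] += a.m[row * 4 + k] * b.m[k * 4 + col];
    return r;
}

struct Bone {
    int  parent; // index of the parent bone, or -1 for the root
    Mat4 local;  // pose relative to the parent
};

// Walk the tree root-first so each parent's world transform is ready
// before its children (assumes bones are sorted parent-before-child).
void ComputeWorldTransforms(const std::vector<Bone>& bones,
                            std::vector<Mat4>& world)
{
    world.resize(bones.size());
    for (std::size_t i = 0; i < bones.size(); ++i)
        world[i] = (bones[i].parent < 0)
                       ? bones[i].local
                       : Mul(world[bones[i].parent], bones[i].local);
}
```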
Smooth skinning is a technique used to improve the quality of animations. With smooth skinning, a vertex is not tied to just one bone, but it can be tied to multiple bones (usually a hard limit, such as 4, is set; vertices rarely need more than 3) with corresponding weights. This makes the animator's job harder, since he has to do a lot more work with vertices near joints, but the result is a much smoother animation with less distortion around the joints.
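A minimal sketch of the per-vertex blend (plain C++; `skin[b]` stands for the usual bone-transform-times-inverse-bind-pose matrix):

```cpp
struct Mat4 { float m[16]; }; // row-major affine 4x4, bottom row (0,0,0,1)

// Transform a point by an affine matrix.
void TransformPoint(const Mat4& t, const float p[3], float out[3])
{
    for (int r = 0; r < 3; ++r)
        out[r] = t.m[r * 4 + 0] * p[0] + t.m[r * 4 + 1] * p[1] +
                 t.m[r * 4 + 2] * p[2] + t.m[r * 4 + 3];
}

struct SkinnedVertex {
    float pos[3];    // bind-pose position
    int   bone[4];   // up to 4 influencing bones
    float weight[4]; // weights summing to 1; unused slots carry weight 0
};

// Smooth skinning: run the bind-pose position through each influencing
// bone's skinning matrix and sum the results by weight.
void SkinVertex(const SkinnedVertex& v, const Mat4* skin, float out[3])
{
    out[0] = out[1] = out[2] = 0.0f;
    for (int i = 0; i < 4; ++i) {
        float p[3];
        TransformPoint(skin[v.bone[i]], v.pos, p);
        for (int k = 0; k < 3; ++k)
            out[k] += v.weight[i] * p[k];
    }
}
```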
Alternatively, some games use procedural animation, whereby the animation data is constructed at runtime. The bone positions and orientations are computed according to some algorithm, such as inverse kinematics or ragdoll physics. Other options are of course available, but they must be coded by programmers.
Instead of procedurally animating the bones and using forward kinematics to determine the locations of all of the model's vertices, another option is to just procedurally generate the position of each vertex on its own. This allows for more complex animations that aren't bound by bone hierarchies, but of course it's much more complicated and much harder to do well.
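A toy sketch of the per-vertex approach (plain C++; a real use case would be far more elaborate):

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

struct V3 { float x, y, z; };

// Procedural per-vertex animation: sway each vertex sideways with a phase
// based on its height, so the top of the mesh (say, a shrub) moves the most.
void SwayVertices(const std::vector<V3>& rest, std::vector<V3>& out,
                  float time, float amplitude)
{
    out.resize(rest.size());
    for (std::size_t i = 0; i < rest.size(); ++i) {
        const V3& p = rest[i];
        float sway = amplitude * p.y * std::sin(time + 0.5f * p.y);
        out[i] = { p.x + sway, p.y, p.z };
    }
}
```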
Different ways to handle animation in games?
For characters, typically:
Skeletal Deformation
Blend shapes
Sometimes even fully animated meshes (L.A. Noire did that)
In the recent Tomb Raider we use a compute job to simulate thousands of splines for the hair, and the vertices are controlled by these splines with a vertex shader.
Often these are driven by key-frame animations which are interpolated at runtime, and sometimes we add code to drive the influences (bones, blendshape weights, etc.) procedurally.
For some simple objects that move rigidly, sometimes the animation will control the whole object without having to deform it. Sometimes simple things like shrubs can be made to move by using something akin to blendshapes, where you have a full mesh pose in a few extremes and you blend between those extremes.
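A toy sketch of that extreme-pose blending (plain C++; the poses and weights are illustrative):

```cpp
#include <cstddef>
#include <vector>

struct V3 { float x, y, z; };

// Blend between full mesh poses ("extremes"): each output vertex is the
// weighted sum of the same vertex across all poses, weights summing to 1.
void BlendPoses(const std::vector<std::vector<V3>>& poses,
                const std::vector<float>& weights,
                std::vector<V3>& out)
{
    out.assign(poses[0].size(), V3{0.0f, 0.0f, 0.0f});
    for (std::size_t p = 0; p < poses.size(); ++p)
        for (std::size_t v = 0; v < poses[p].size(); ++v) {
            out[v].x += weights[p] * poses[p][v].x;
            out[v].y += weights[p] * poses[p][v].y;
            out[v].z += weights[p] * poses[p][v].z;
        }
}
```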
As for tutorials... well... Are you interested in writing the math for the animation yourself or are you more interested in getting some animated character in your game and focus on the gameplay?
For just doing the gameplay, Unity is a pretty great platform to play around with.
If you are however interested in understanding the math for animation or games in general, here's what I'd read up on as a primer:
Matrices used for 3d transforms (make sure you understand the associative and distributive laws, note that matrix multiplication is not commutative, and learn what each row and column represents - they are simpler than they seem)
Quaternions for rotations (less intuitive, but closely tied to an axis-and-angle view of rotation)
And don't leave out the basics (a quick sketch follows this list):
Vector Dot Product
Vector Cross Product
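Here is that quick sketch of the two (plain C++):

```cpp
#include <cstdio>

struct V3 { float x, y, z; };

// Dot product: |a||b|cos(angle). Positive when the vectors point roughly
// the same way, zero when perpendicular.
float Dot(V3 a, V3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Cross product: a vector perpendicular to both inputs with length
// |a||b|sin(angle); handy for face normals and winding tests.
V3 Cross(V3 a, V3 b)
{
    return { a.y * b.z - a.z * b.y,
             a.z * b.x - a.x * b.z,
             a.x * b.y - a.y * b.x };
}

int main()
{
    V3 x{1, 0, 0}, y{0, 1, 0};
    V3 n = Cross(x, y); // expect (0, 0, 1)
    std::printf("dot = %f, cross = (%f, %f, %f)\n", Dot(x, y), n.x, n.y, n.z);
}
```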
Also, about 10 years ago I wrote a simple animation library that is free of most of the more advanced concepts, so it's not a bad place to look if you want to get a basic idea of how it works: http://animadead.sf.net
Good Luck!