What are the different ways to handle animations for 3D games?
Do you somehow author the animations in the modeler (and thus in the exported file), then read and play them back in the game, or do you write animation functions that animate your static vertex data?
Where can I find some good tutorials for the programming side of this? Google only gave me the modeler's side.
In production environments, animators use specialized tools such as Autodesk 3ds Max to generate keyframed animations of 3D models. For each animation of each model, the animator constructs a number of poses for the model, called keyframes, which are then exported into the game's data format.
The game then loads these keyframes, and to animate the model at a particular time, it picks the two nearest keyframes and interpolates between them to give a smooth animation, even with a small number of keyframes.
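To make that lookup-and-interpolate step concrete, here is a minimal sketch in plain JavaScript (positions only, no particular engine assumed; a real implementation would also interpolate rotations, typically with quaternion slerp):

```javascript
// Given keyframes sorted by time, find the two surrounding a query time and
// linearly interpolate a bone's position between them.
function sampleKeyframes(keyframes, t) {
  // keyframes: [{ time, position: { x, y, z } }, ...], sorted by time
  if (t <= keyframes[0].time) return keyframes[0].position;
  const last = keyframes[keyframes.length - 1];
  if (t >= last.time) return last.position;

  // Find the pair of keyframes bracketing t
  let i = 1;
  while (keyframes[i].time < t) i++;
  const a = keyframes[i - 1];
  const b = keyframes[i];

  // Normalized blend factor between the two keyframes
  const s = (t - a.time) / (b.time - a.time);
  return {
    x: a.position.x + (b.position.x - a.position.x) * s,
    y: a.position.y + (b.position.y - a.position.y) * s,
    z: a.position.z + (b.position.z - a.position.z) * s,
  };
}
```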
Animated models are typically constructed with a bone hierarchy. There is a root bone, which controls the model's location and orientation within the game world. All of the other bones are defined relative to some parent bone to create a tree. Each vertex of the model is tied to a particular bone, so the model can be controlled with a much smaller number of parameters: the relative positions, orientations, and scales of each bone.
Smooth skinning is a technique used to improve the quality of animations. With smooth skinning, a vertex is not tied to just one bone, but it can be tied to multiple bones (usually a hard limit, such as 4, is set; vertices rarely need more than 3) with corresponding weights. This makes the animator's job harder, since he has to do a lot more work with vertices near joints, but the result is a much smoother animation with less distortion around the joints.
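In code, smooth skinning boils down to a weighted sum of bone transforms per vertex. A rough CPU sketch follows; in practice this runs in the vertex shader, and each matrix is a bone's current transform multiplied by its inverse bind matrix. All names here are illustrative:

```javascript
// Linear blend ("smooth") skinning for one vertex.
// boneMatrices: array of 4x4 row-major matrices (last row assumed [0,0,0,1]).
function skinVertex(bindPosition, boneIndices, boneWeights, boneMatrices) {
  const out = { x: 0, y: 0, z: 0 };
  for (let i = 0; i < boneIndices.length; i++) {      // usually up to 4 influences
    const m = boneMatrices[boneIndices[i]];
    const w = boneWeights[i];                         // weights should sum to 1
    out.x += w * (m[0] * bindPosition.x + m[1] * bindPosition.y + m[2] * bindPosition.z + m[3]);
    out.y += w * (m[4] * bindPosition.x + m[5] * bindPosition.y + m[6] * bindPosition.z + m[7]);
    out.z += w * (m[8] * bindPosition.x + m[9] * bindPosition.y + m[10] * bindPosition.z + m[11]);
  }
  return out;
}
```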
Alternatively, some games use procedural animation, whereby the animation data is constructed at runtime. The bone positions and orientations are computed according to some algorithm, such as inverse kinematics or ragdoll physics. Other options are of course available, but they must be coded by programmers.
Instead of procedurally animating the bones and using forward kinematics to determine the locations of all of the model's vertices, another option is to just procedurally generate the position of each vertex on its own. This allows for more complex animations that aren't bound by bone hierarchies, but of course it's much more complicated and much harder to do well.
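As a toy example of the per-vertex approach, here is a sketch that displaces the vertices of a flat mesh with a travelling sine wave (say, for a water surface), with no bones involved at all. The buffer layout is assumed to be a flat [x, y, z, ...] array, as in a typical vertex buffer:

```javascript
// Procedurally animate each vertex's height from its position and the time.
function waveVertices(positions, time) {
  for (let i = 0; i < positions.length; i += 3) {
    const x = positions[i];
    const z = positions[i + 2];
    // Amplitude and frequency are arbitrary placeholder values.
    positions[i + 1] = 0.25 * Math.sin(2.0 * x + time) * Math.cos(2.0 * z + time);
  }
}
```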
Different ways to handle animation in games?
For characters, typically:
Skeletal Deformation
Blend shapes
Sometimes even fully animated meshes (L.A. Noire did that)
In the recent Tomb Raider we use a compute job to simulate thousands of splines for the hair, and the vertices are controlled by these splines with a vertex shader.
Often these are driven by key-frame animations that are interpolated at runtime, and sometimes we add code to drive the influences (bones, blend-shape weights, etc.) procedurally.
For some simple objects that move rigidly, the animation sometimes just controls the whole object's transform without deforming it. Simple things like shrubs can be made to move with something akin to blend shapes: you store a full mesh pose for a few extremes and blend between those extremes.
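That "blend between a few extreme poses" idea is essentially morph targets / blend shapes. A hedged sketch of the arithmetic, assuming flat [x, y, z, ...] arrays and targets stored as absolute positions:

```javascript
// result = base + sum over targets of weight * (target - base)
function blendShapes(basePositions, targets, weights, outPositions) {
  for (let i = 0; i < basePositions.length; i++) {
    let v = basePositions[i];
    for (let t = 0; t < targets.length; t++) {
      v += weights[t] * (targets[t][i] - basePositions[i]);
    }
    outPositions[i] = v;
  }
}
```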
As for tutorials... well... Are you interested in writing the math for the animation yourself or are you more interested in getting some animated character in your game and focus on the gameplay?
For just doing the gameplay, Unity is a pretty great platform to play around with.
If, however, you are interested in understanding the math behind animation, or games in general, here's what I'd read up on as a primer:
Matrices used for 3D transforms (make sure you understand how the associative and distributive laws apply and that matrix multiplication is not commutative, and learn what each row and column represents; they are simpler than they seem)
Quaternions for rotations (less intuitive, but closely related to an axis-and-angle representation of rotation)
And don't leave out the basics (a small sketch follows this list):
Vector Dot Product
Vector Cross Product
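For reference, those two operations boil down to a few lines (plain {x, y, z} objects, no library assumed):

```javascript
function dot(a, b) {
  // Scalar: |a||b|cos(angle); useful for projections and "how aligned are these?"
  return a.x * b.x + a.y * b.y + a.z * b.z;
}

function cross(a, b) {
  // Vector perpendicular to both a and b, with length |a||b|sin(angle)
  return {
    x: a.y * b.z - a.z * b.y,
    y: a.z * b.x - a.x * b.z,
    z: a.x * b.y - a.y * b.x,
  };
}
```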
Also, about 10 years ago I wrote a simple animation library. It's free of most of the more advanced concepts, so it's not a bad place to look if you want to get a basic idea of how it works: http://animadead.sf.net
Good Luck!
I compose multiple STLs for 3D printing / milling. For that I also use CSG and need some raytracing for detecting features of the models.
My scene is pretty much static; I just have to move the models around to arrange them. For this use case I'm not really sure which approach to moving / rotating the models is right.
Currently I manipulate the BufferGeometries directly, so everything in the geometry is as it is in the real world: each position, each normal, with no conversion between local and world coordinates.
On the other hand, I could do the same thing by changing the meshes, which means changing just a matrix.
To me, working with the mesh feels more suited to animation and the like, while working with the geometry manipulates the real object, which is my intention.
I'm wondering when one would translate / rotate the geometry and when the mesh. I know that manipulating the geometry is harder on the CPU, which is not a problem for my use case.
Geometry can be translated so that subsequent transformations (such as scale or rotation) originate from a more convenient point. Keep in mind that several meshes can share one geometry, so baking a transform into the geometry affects all of them. There are distinct use cases for each approach, and sometimes the decision is made for you by some aspect of the process, such as preexisting code samples you integrate. Where either would do, pick whichever is more convenient. I like the pattern of applying those transforms to a dummy Object3D and then updating the geometry from its matrix. Transforming the geometry also means keeping the normals consistent, which is a whole topic of its own.
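To make the difference concrete, here is a small sketch of the three options using the current three.js BufferGeometry/Object3D API (the box and the specific numbers are just examples, and the options are alternatives shown together only for illustration):

```javascript
import * as THREE from 'three';

const geometry = new THREE.BoxGeometry(1, 1, 1);
const mesh = new THREE.Mesh(geometry, new THREE.MeshBasicMaterial());

// Option 1: bake the transform into the vertex data itself. Geometry coordinates
// and world coordinates stay identical, which is handy for CSG / raycasting
// against the raw positions.
geometry.translate(10, 0, 0);
geometry.rotateY(Math.PI / 4);

// Option 2: leave the geometry alone and move the mesh, i.e. only its matrix changes.
mesh.position.set(10, 0, 0);
mesh.rotation.y = Math.PI / 4;
mesh.updateMatrixWorld();

// Option 3: the "dummy Object3D" pattern: compose the transform on a throwaway
// object, then bake its matrix into the geometry in one step.
const dummy = new THREE.Object3D();
dummy.position.set(10, 0, 0);
dummy.rotation.y = Math.PI / 4;
dummy.updateMatrix();
geometry.applyMatrix4(dummy.matrix);
```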
So, I want to start making a game engine, and I realized that I would have to draw 3D objects and a GUI (immediate mode) at the same time.
The 3D objects will use a perspective projection matrix, and since the GUI is in 2D space I will have to use an orthographic projection matrix.
How can I implement that? Please, can anyone guide me? I'm not a professional graphics programmer.
Also, I'm using DirectX 11, so please keep it that way.
To preface my answer, when I say "draw at the same time", I mean all drawing that takes place with a single call to ID3D11DeviceContext::Draw (or DrawIndexed/DrawAuto/etc). You might mean something different.
You are not required to draw objects with orthographic and perspective projections at the same time, and this isn't commonly done.
Generally the projection matrix is provided to a vertex shader via a shader constant (frequently as a concatenation of the World, View and Projection matrices). When drawing a perspective object you would bind one set of constants; when drawing an orthographic one, you'd bind a different set. Frequently, different shaders are used to render perspective and orthographic objects, because they generally have completely different properties (e.g. lighting).
You could draw the two different types of objects at the same time, and there are several ways you could accomplish that. A straightforward way would be to provide both projection matrices to the vertex shader, and have an additional vertex stream which determines which projection matrix to use.
In some edge cases you might get a small performance benefit from this sort of batching, but I don't suggest you do that. Make your life easier and use separate draw calls for orthographic and perspective objects.
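The API details differ, but the structure is the same in any renderer: two projection matrices, two groups of draw calls. Purely as an illustration of that two-pass structure, here is a hedged sketch using three.js/WebGL; the D3D11 equivalent is updating the constant buffer holding the projection matrix (and switching shaders as needed) between the two groups of Draw calls:

```javascript
import * as THREE from 'three';

const renderer = new THREE.WebGLRenderer();
renderer.setSize(window.innerWidth, window.innerHeight);
renderer.autoClear = false; // clear once, then draw both passes ourselves
document.body.appendChild(renderer.domElement);

const scene3d = new THREE.Scene();   // world objects
const hudScene = new THREE.Scene();  // GUI quads

const perspCamera = new THREE.PerspectiveCamera(
  60, window.innerWidth / window.innerHeight, 0.1, 1000);
perspCamera.position.z = 5;

// Orthographic "projection constants" for the 2D GUI, in pixel units.
const orthoCamera = new THREE.OrthographicCamera(
  0, window.innerWidth, window.innerHeight, 0, -1, 1);

function render() {
  renderer.clear();
  renderer.render(scene3d, perspCamera);  // pass 1: perspective projection
  renderer.render(hudScene, orthoCamera); // pass 2: orthographic projection on top
  requestAnimationFrame(render);
}
render();
```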
I am curious about the limits of three.js. The following question is asked mainly as a challenge, not because I actually need the specific knowledge/code right away.
Say you have a game/simulation world model around a sphere geometry representing a planet, like the worlds of the game Populous. The resolution of polygons and textures is sufficient to look smooth when the globe fills the view of an ordinary camera. There are animated macroscopic objects on the surface.
The challenge is to project everything from the model to a global map projection on the screen in real time. The choice of projection is yours, but it must be seamless/continuous, and it must be possible for the user to rotate it, placing any point on the planet surface in the center of the screen. (It is not an option to maintain an alternative model of the world only for visualization.)
There are no limits on the number of cameras etc. allowed, but the performance must be expected to be "real time", say a double-digit FPS or more.
I don't expect any proof in the form of a running application (although that would be cool), but some explanation as to how it could be done.
My own initial idea is to place a lot of cameras, in fact one for every pixel in the map projection, around the globe, within a Group object that is attached to some kind of orbit controls (with rotation only), but I expect the number of object culling operations to become a huge performance issue. I am sure there must exist more elegant (and faster) solutions. :-)
Why not just use a spherical camera model (think of a 360° camera) and virtually put it at the center of the sphere? This camera would (if it were physically possible) be wrapped all around the sphere, looking toward the center from all directions.
This camera could be implemented in shaders (instead of the regular projection matrix) and would produce an equirectangular image of the planet surface (or in fact any other projection you want, such as spherical Mercator).
As far as I can tell, the vertex shader can implement any projection you want, and it doesn't need to represent a camera that is physically possible. It just needs to produce consistent clip-space coordinates for all vertices. Fragment shaders for lighting would still need to operate on the original coordinates, normals, etc., but that should be achievable. So the vertex shader would just need to compute (x, y, z) => (phi, theta, r) and go on with that.
Occlusion culling would need to be disabled, but IIRC three.js doesn't do that anyway.
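A rough sketch of what such a shader-based projection could look like, as a three.js ShaderMaterial applied to the planet mesh with the camera sitting at the sphere's center (so modelViewMatrix already gives positions relative to it). Seam handling for triangles straddling longitude ±π, proper lighting, and a sensible far plane are deliberately left out; all constants are placeholders:

```javascript
import * as THREE from 'three';

const equirectangularMaterial = new THREE.ShaderMaterial({
  side: THREE.DoubleSide, // surfaces are seen from inside the sphere
  vertexShader: /* glsl */ `
    varying vec3 vLocalPos;
    void main() {
      // Position relative to the "camera" at the planet center.
      vec3 p = (modelViewMatrix * vec4(position, 1.0)).xyz;
      float r = length(p);
      float longitude = atan(p.x, -p.z);                   // phi, -PI..PI
      float latitude  = asin(clamp(p.y / r, -1.0, 1.0));   // theta, -PI/2..PI/2
      vLocalPos = position;
      gl_Position = vec4(
        longitude / 3.14159265,          // map longitude to clip x in -1..1
        latitude / (0.5 * 3.14159265),   // map latitude to clip y in -1..1
        r / 1000.0,                      // crude depth; pick a real far plane
        1.0
      );
    }
  `,
  fragmentShader: /* glsl */ `
    varying vec3 vLocalPos;
    void main() {
      // Placeholder shading so the projection is visible.
      gl_FragColor = vec4(normalize(abs(vLocalPos)), 1.0);
    }
  `,
});
```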
Does three.js have any function or capability for AI (artificial intelligence)? Specifically, let's say an FPS game: I want enemies to look for me and try to kill me. Is that possible in three.js? Does it have such functionality or a system for it?
WebGL
create buffer
bind buffer
allocate data
set up state
issue draw call
run GLSL shaders
three.js
create a 3d context using WebGL
create 3 dimensional objects
create a scene graph
create primitives like spheres, cubes, toruses
move objects around, rotate them, scale them
test for intersections between rays, triangles, planes, spheres, etc.
create 'materials' (rather than shaders)
JavaScript
write algorithms
I want enemies to look for me and try to kill me
Yes, three.js is capable of doing this, you just have to write an algorithm using three's classes. Your enemies would be 3d objects, casting rays, intersecting with other objects, etc.
You would be building a game engine, and you could use three.js as your rendering framework within that engine. Rendering is just one part of it. Think of a 2D shooter: you could make it using a 2D context, but you could also enhance it and make it 2.5D by working with a 3D context. Everything else can stay the same.
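For example, a "can the enemy see the player?" check needs nothing beyond three.js's Raycaster. A hedged sketch, where enemy, player and obstacles are whatever Object3D instances your game uses:

```javascript
import * as THREE from 'three';

// Cast a ray from the enemy toward the player and check whether any level
// geometry is hit first, i.e. whether the line of sight is blocked.
function enemyCanSeePlayer(enemy, player, obstacles, maxDistance = 50) {
  const origin = enemy.position.clone();
  const toPlayer = player.position.clone().sub(origin);
  const distance = toPlayer.length();
  if (distance > maxDistance) return false;

  const raycaster = new THREE.Raycaster(origin, toPlayer.normalize(), 0, distance);
  const hits = raycaster.intersectObjects(obstacles, true);
  return hits.length === 0; // nothing in between: the enemy sees the player
}

// The rest (steering toward the player, firing, health) is plain JavaScript
// game logic layered on top; three.js only supplies the math and rendering.
```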
Is there any WebGL engine that might have it? Or is it just not a WebGL thing?
Unity probably has everything you can possibly think of. Unity is capable of outputting WebGL, so it could be considered a "WebGL engine".
Babylon.js is more engine-like.
Three.js is the best and most powerful WebGL 3D engine, with no equal on the market, and it's missing out on such an ability.
Three.js isn't exactly a 3d engine. Wikipedia says:
Three.js is a lightweight cross-browser JavaScript library/API used to create and display animated 3D computer graphics on a Web browser.
Three.js uses WebGL.
So if I need to just draw a car or a spinning logo, I don't need them to come looking for me or try to shoot me. I just need them to stay in one place and rotate.
For a graphics demo you don't even need this - with a few draw instructions, you could render a full screen quad with a very elaborate pixel shader. Three gives you a ton of options, especially if you consider all the featured examples.
It works both ways: while you can expand three.js any way you want, you can also strip it down to a very specific purpose.
If you need to build an app that does image processing and features no "3D" graphics, you could still leverage WebGL with three.js.
You don't need any vector, matrix, ray, or geometry classes.
If you don't have Vector3, you probably can't keep PlaneGeometry, but you could use BufferGeometry and construct a plane manually. No transformations need to happen, so there's no need for matrix classes. You'd use shaders, textures, and perhaps something like the EffectComposer.
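A sketch of that stripped-down usage: a full-screen quad built by hand with BufferGeometry, positions already in clip space so no matrices or cameras matter, and a placeholder grayscale effect standing in for the real image processing (uTexture is whatever source texture you feed it):

```javascript
import * as THREE from 'three';

// Two triangles covering clip space, constructed manually.
const geometry = new THREE.BufferGeometry();
geometry.setAttribute('position', new THREE.BufferAttribute(new Float32Array([
  -1, -1, 0,   1, -1, 0,   1, 1, 0,   // triangle 1
  -1, -1, 0,   1,  1, 0,  -1, 1, 0,   // triangle 2
]), 3));

const material = new THREE.ShaderMaterial({
  uniforms: { uTexture: { value: null } }, // assign your source texture here
  vertexShader: /* glsl */ `
    varying vec2 vUv;
    void main() {
      vUv = position.xy * 0.5 + 0.5;     // derive UVs from clip-space position
      gl_Position = vec4(position, 1.0); // pass through, no camera involved
    }
  `,
  fragmentShader: /* glsl */ `
    uniform sampler2D uTexture;
    varying vec2 vUv;
    void main() {
      vec3 c = texture2D(uTexture, vUv).rgb;
      // Grayscale as a stand-in for whatever processing you actually need.
      gl_FragColor = vec4(vec3(dot(c, vec3(0.299, 0.587, 0.114))), 1.0);
    }
  `,
});

const quad = new THREE.Mesh(geometry, material);
quad.frustumCulled = false; // the shader ignores the camera, so never cull it
const scene = new THREE.Scene();
scene.add(quad);
// Render this scene with any camera; the vertex shader ignores it.
```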
I'm afraid not. Three.js is just an engine for displaying 3D content.
Using it to create games is only one possibility. A few frameworks do ship with pre-coded features such as AI (among other things) to attract game creators, but using them is more restrictive than writing exactly the code you need.
Three.js itself doesn't, but https://mugen87.github.io/yuka/ (Yuka) is a great AI engine that can work in collaboration with three.js.
They have line-of-sight and shooting game logic, as well as vehicle logic, which I've been playing around with recently; there's a React Three Fiber example here: https://codesandbox.io/s/loving-tdd-u1fs9o
I need the fastest sphere mapping algorithm. Something like Bresenham's line drawing one.
Something like the implementation that I saw in Star Control 2 (rotating planets).
Are there any already invented and/or implemented techniques for this?
I really don't want to reinvent the wheel. Please help...
Description of the problem.
I have a place on the 2D surface where the sphere has to appear. The sphere (let it be Earth) has to be textured with a detailed map and must be able to scale and rotate freely. I want to implement it with a lookup map or some simple transformation function of coordinates: each pixel on the 2D image of the sphere is defined in terms of one or more pixels from the cylindrical map of the sphere. This gives me the ability to implement antialiasing of the resulting image. I am also thinking about using mipmaps when one pixel of the resulting picture corresponds to more than one pixel of the original map (for example, close to the poles of the sphere). Deep down I feel that this can be implemented with some trivial math, but these are just my own thoughts.
This question is a little bit related to this one: Textured spheres without strong distortion, but it did not answer my question.
UPD: Assume that I have no hardware support. I want a cross-platform solution.
The standard way to do this kind of mapping is a cube map: the sphere is projected onto the 6 sides of a cube. Modern graphics cards support this kind of texture at the hardware level, including full texture filtering; I believe mipmapping is also supported.
An alternative method (which is not explicitly supported by hardware, but which can be implemented with reasonable performance by procedural shaders) is parabolic mapping, which projects the sphere onto two opposing parabolas (each of which is mapped to a circle in the middle of a square texture). The parabolic projection is not a projective transformation, so you'll need to handle the math "by hand".
In both cases, the distortion is strictly limited. Due to the hardware support, I recommend the cube map.
There is a nice new way to do this: HEALPix.
Advantages over any other mapping:
The bitmap can be divided into equal parts (very little distortion)
Very simple, recursive geometry of the sphere with arbitrary precision.
Example image.
Did you take a look at Jim Blinn's articles "How to Draw a Sphere"? I do not have access to the full articles, but it looks like what you need.
I'm a big fan of StarconII, but unfortunately I don't remember the details of what the planet drawing looked like...
The first option is triangulating the sphere and drawing it with standard 3D polygons. This has definite weaknesses as far as verisimilitude is concerned, but it uses the available hardware acceleration and can be made to look reasonably good.
If you want to roll your own, you can rasterize it yourself. Foley, van Dam et al.'s Computer Graphics: Principles and Practice has a chapter on Bresenham-style algorithms; you want the section on "Scan Converting Ellipses".
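If you do rasterize it yourself, the per-pixel math is roughly this (a hedged sketch with a single y-axis rotation, nearest-neighbour sampling, and no antialiasing or mipmapping; dest and map are flat pixel arrays, radius is in pixels):

```javascript
// For every pixel inside the sphere's silhouette, recover the 3D point on the
// unit sphere, rotate it, convert to latitude/longitude and look the color up
// in the cylindrical (equirectangular) map.
function drawSphere(dest, destWidth, cx, cy, radius, rotationY, map, mapWidth, mapHeight) {
  for (let py = -radius; py <= radius; py++) {
    for (let px = -radius; px <= radius; px++) {
      if (px * px + py * py > radius * radius) continue; // outside the silhouette

      // Point on the unit sphere facing the viewer
      const x = px / radius;
      const y = py / radius;
      const z = Math.sqrt(1 - x * x - y * y);

      // Spin the planet around its vertical axis
      const lon = Math.atan2(x, z) + rotationY;
      const lat = Math.asin(y);

      // Cylindrical (equirectangular) texture lookup, with longitude wrap-around
      const u = ((lon / (2 * Math.PI)) % 1 + 1) % 1;
      const v = lat / Math.PI + 0.5;
      const sx = Math.floor(u * (mapWidth - 1));
      const sy = Math.floor(v * (mapHeight - 1));

      dest[(cy + py) * destWidth + (cx + px)] = map[sy * mapWidth + sx];
    }
  }
}
```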
For the point cloud idea I suggested in earlier comments: you could avoid runtime parameterization questions by preselecting and storing the (x,y,z) coordinates of surface points instead of a 2D map. I was thinking of partially randomizing the point locations on the sphere, so that they wouldn't cause structured aliasing when transformed (forwards, backwards, whatever 8^) onto the screen. On the downside, you'd have to deal with the "fill" factor -- summing up the colors as you draw them, and dividing by the number of points. Er, also, you'd have the problem of what to do if there are no points; e.g., if you want to zoom in with extreme magnification, you'll need to do something like look for the nearest point in that case.