Applying temporal antialiasing to an animation

Can anyone recommend a workflow or an existing solution that lets me apply temporal antialiasing to an animation? I'm seeing temporal artifacts when animating rendered graphics: fast-moving objects create ghosting artifacts.
I prototyped something in Mathematica that does the job, but it's quite slow, so I'm wondering if there's a faster open-source solution. The example below blends 45 images per frame.
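For reference, here is a minimal sketch of the blending step in TypeScript against the 2D canvas API (an illustration only; the actual prototype is in Mathematica, and drawScene is a hypothetical callback that renders the scene at time t):

    // Average N sub-frames over the frame's exposure interval (a box filter in time).
    function renderBlendedFrame(
      ctx: CanvasRenderingContext2D,
      drawScene: (ctx: CanvasRenderingContext2D, t: number) => void,
      frameStart: number,
      exposure: number,
      subSamples = 45 // matches the 45 images per frame mentioned above
    ): void {
      ctx.clearRect(0, 0, ctx.canvas.width, ctx.canvas.height);
      ctx.globalCompositeOperation = "lighter"; // additive blending
      ctx.globalAlpha = 1 / subSamples;         // equal weight per sub-frame
      for (let i = 0; i < subSamples; i++) {
        drawScene(ctx, frameStart + (i / subSamples) * exposure);
      }
      ctx.globalAlpha = 1;
      ctx.globalCompositeOperation = "source-over";
    }

The cost scales linearly with the number of sub-frames, which is why 45 samples per frame gets slow in any naive implementation.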

Related

If Core Graphics uses Metal under the hood, can a Metal implementation run faster than a CG one? Why?

Let's say I want to develop a paint app and need to implement a brush engine. For a raster brush, you basically need to stamp a texture at touch locations with a given spacing.
-- Task: Composite a small image (brush tip) over a bigger one.
I decided to build a prototype first in CG, using a CGContext to render the stamps, and found that it performed pretty well even with coalesced touches and a decently sized canvas (the CGContext output size).
However, since I need to paint onto really big textures (8000x6000 would be great), I decided to give Metal a chance. I know this task might be trivial for someone with a background in Metal, but I'm new to the field. So I tried using CIFilters (Metal-backed) to composite the brush over the canvas, displaying the result in a custom MetalImageView: MTKView.
I thought that having the canvas and the brush as CIImages and displaying them in a Metal layer would already be more performant than the naive CG implementation, but it's not: the CIFilter approach re-renders the entire canvas on every single stamp(at:) call, whereas in CG I just refresh a small rect around that point.
Now, I think I could fix that with the CIFilter approach if I could change the extent that gets computed. I don't know whether that can be done with Core Image, but I'm sure it would be easy in Metal for someone with experience.
-- Question: Can a pure Metal implementation stamp images faster than the CG one, given that CG runs on Metal under the hood? If so, how much faster? Is it worth learning how to do it, or would that time be better spent improving the CG implementation?
Note that I'm asking about a raster brush, not a vector brush with Bezier paths - the latter is far easier to code and runs faster, but it can't use textured brushes.
I really appreciate any help.
There is actually a chapter in the Core Image Programming Guide about exactly that. It describes continuous painting into the same texture using the CIImageAccumulator class. You can also download the sample app.
Performance-wise, there shouldn't be a huge difference. You should be able to optimize heavily by telling Core Image the region of interest and the domain of definition (extent) of your brush-stroke filter. Then it should be able to render only the necessary parts of the image instead of the whole thing in every frame.
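To make that concrete, here is the accumulator-plus-dirty-rect pattern sketched in TypeScript with the 2D canvas API - purely an analogy for illustration (the real implementation would use CIImageAccumulator and the ROI/DOD callbacks mentioned above; the canvas size is the one from the question):

    // Persistent "accumulator" holding the painting so far.
    const accumulator = document.createElement("canvas");
    accumulator.width = 8000;
    accumulator.height = 6000;
    const accCtx = accumulator.getContext("2d")!;

    function stamp(
      view: CanvasRenderingContext2D,
      brush: CanvasImageSource,
      x: number, y: number, size: number
    ): void {
      // The "domain of definition" of a stamp is just this small rect.
      accCtx.drawImage(brush, x - size / 2, y - size / 2, size, size);
      // Re-present only the dirty rect, never the whole 8000x6000 canvas.
      view.drawImage(
        accumulator,
        x - size / 2, y - size / 2, size, size, // source rect
        x - size / 2, y - size / 2, size, size  // destination rect
      );
    }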

Dividing a sphere into multiple textures

I have a sphere with a texture of Earth that I generate on the fly with the canvas element from an SVG file, then manipulate.
The texture size is 16384x8192; anything smaller looks blurry at close zoom.
But this is a huge texture size and it's causing memory problems... (it does look very good when it works, though).
I think a better approach would be to split the sphere into 32 separate textures, each 2048x2048.
A few questions:
How can I split the sphere and assign the right textures?
Is this approach better than a single huge texture in terms of memory and performance?
Is there a better solution?
Thanks
You could subdivide a cube and cubemap it.
Instead of one texture per face, you would have NxN textures per face. 32 doesn't split evenly across the six faces, but 24, for example, does (6 x 2x2).
You will still use the same amount of memory. If the shape actually needs to be spherical, you can further subdivide the segments and normalize the entire shape to spherify it - see the sketch below.
You probably can't even use such a big texture anyway.
[Image: a cubemapped subdivided sphere (top); ignore the isocube.]
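A minimal sketch of the subdivide-and-spherify step, in TypeScript with three.js (assumed here for illustration since the question concerns a WebGL globe; only the +Z face is shown, the other five follow by symmetry):

    import * as THREE from "three";

    // One cube face split into an n x n grid of tiles, each with its own
    // texture, with every vertex normalized onto the unit sphere.
    function spherifiedFace(n: number, textures: THREE.Texture[]): THREE.Group {
      const group = new THREE.Group();
      const tile = 2 / n; // the face spans [-1, 1] in x and y
      for (let i = 0; i < n; i++) {
        for (let j = 0; j < n; j++) {
          const geo = new THREE.PlaneGeometry(tile, tile, 8, 8);
          geo.translate(-1 + (i + 0.5) * tile, -1 + (j + 0.5) * tile, 1);
          const pos = geo.attributes.position;
          for (let v = 0; v < pos.count; v++) {
            // "Spherify": push each vertex onto the unit sphere.
            const p = new THREE.Vector3().fromBufferAttribute(pos, v).normalize();
            pos.setXYZ(v, p.x, p.y, p.z);
          }
          geo.computeVertexNormals();
          const mat = new THREE.MeshBasicMaterial({ map: textures[i * n + j] });
          group.add(new THREE.Mesh(geo, mat));
        }
      }
      return group;
    }

Each tile keeps the default 0..1 UVs of its plane, so assigning one texture per tile needs no extra UV work.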
Typically, that's not something you'd do programmatically but in a 3D program like Blender or 3ds Max. It involves some trivial mesh separation, UV mapping, and material assignment. One other approach worth experimenting with would be to have multiple materials but only one mesh - you'd still get (somewhat) progressive loading. But:
Are you sure you'd be better off with "chunks" loading sequentially rather than one big texture taking a long time to load? Sure, it'll improve things a bit in terms of timeouts and caching, but the tradeoff is having big chunks of your mesh be textureless, which is noticeable and unaesthetic.
There are a few approaches that would mitigate your problem. First, it's important to understand that texture-loading optimization techniques - while common in game engines - aren't really part of threejs or what it's built for. You'll never get the near-seamless LODs or GPU optimization techniques that you get with UE4 or Unity. Furthermore, webGL - while having made many strides over the past decade - is not ideal for handling vast texture sizes, not at the GPU level (since it's based on OpenGL ES, suited primarily for mobile devices) and certainly not at the caching level - we're still dealing with browsers here. You won't find a lot of webGL work done with textures of the dimensions you refer to.
Having said that,
A. A loader will let you do other things while your textures are loading, so your user isn't staring at an 'unfinished mesh'. It lets you be pretty clever with dynamic loading times and UX design. Additionally, take a look at this gist to get an idea of what a progressive texture loader could look like; a minimal version of the idea is sketched after this list. A much more involved, JPEG-specific technique can be found here, but I wouldn't approach it unless you're comfortable with low-level graphics programming.
B. Threejs does have a basic implementation of LOD, although I haven't tinkered with it myself and am not sure it's useful for textures; that said, the basic premise to look into is whether you can load progressively higher-resolution files on a per-need basis, just as Google Earth does, for example.
C. This is out of the scope of your question - but I'd look into what happens under the hood in Unity's webgl export (which is based on threejs), and what kind of clever tricks are being employed there for similar purposes.
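Following up on A: a minimal sketch of progressive texture loading with three.js in TypeScript (the helper and its parameters are illustrative, not taken from the gist mentioned above):

    import * as THREE from "three";

    // Show a small texture immediately, then swap in the high-res version
    // once it has finished downloading, so the mesh is never textureless.
    function progressiveTexture(
      material: THREE.MeshBasicMaterial,
      lowResUrl: string,
      highResUrl: string
    ): void {
      const loader = new THREE.TextureLoader();
      material.map = loader.load(lowResUrl); // usable almost immediately
      material.needsUpdate = true;
      loader.load(highResUrl, (hiRes) => {
        material.map = hiRes;                // swap when the download completes
        material.needsUpdate = true;
      });
    }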
Finally, does your project have to be in webgl? For something ambitious and demanding, sometimes "proper" openGL / DX makes much more sense.

How to quickly create hundreds of biped animations?

I am a video game programmer working on building my own video game. I've decided that in order to build my game, I am going to need a large amount of animation files from 3DS Max.
My question is, what is the best approach to building a huge number of animation files? I'm looking to create 20 movement animations + 4 fighting styles * 18 attack types + 8 shooting animations + 10-20 magic casting animations, for an estimated total of 110-120 animations (and probably more that I can't think of now).
I'm personally only planning on creating a small number of these animations myself, but I am trying to design the best workflow for creating a huge number of animations so that once I decide to create these animations, it is a feasible task.
I am familiar with how to create animations manually in 3ds Max, but this approach seems slow and would take too many man-hours to complete. I am vaguely familiar with motion capture, but I don't know of any approaches or tutorials for it, and I don't know whether it would work at that scale.
A few suggestions for making many animations quickly on a low budget:
Avoid 3ds Max bones; use the Biped system with the Skin modifier so you don't have to spend much time creating the rig.
Plan your game design around what's feasible: simple character models, without complex effects like hair, cloth, and facial-expression morphs.
Since motion capture is expensive, you can bring reference videos into your scene as a plane's texture to help you create animation keys.
Use MaxScript to automate repetitive tasks. MaxScript is easy to learn, and there are lots of free plugins at http://www.scriptspot.com/
There is a lot of work you can't avoid if you want to create original content, unless you choose the expensive way:
The really quick approach is to use a service like http://www.mixamo.com/
There you upload your model, auto-rig it, and apply animations in less than 3 minutes each. They have a database of motion captures and also provide custom motions.

Lightweight 3D animation driven by external data

I'm a structural engineering master's student working on a seismic evaluation of a temple structure in Portugal. For the evaluation, I have created a 3D block model of the structure and will use a discrete element code to analyze its behaviour under a variety of seismic (earthquake) records. The software I will use for the analysis can produce snapshots of the structure at regular intervals, which can then be put together to make a movie of the response. However, producing the images slows down the analysis. Furthermore, since the pictures are 2D images from a specified angle, there is no way to rotate and view the response from other angles without re-running the model (a process that currently takes 3 days of computer time).
I am looking for an alternative method of creating a movie of the response of the structure. What I want is a very lightweight solution, where I can just bring in the block model I have and then produce the animation on the fly by feeding in the location and the three principal axes of each block at regular intervals. The blocks are described as prisms, with the top and bottom planes defining all of the vertices. Since the model is produced as text files, I can modify the output so that it can be read and understood by the animation code. The model is composed of about 180 blocks with 24 vertices per block (so 4320 vertices). The location and the three unit vectors describing each block's axes are produced by the program, and I can write them out in whatever format is needed.
The main issue is that the quality of the animation should be decent. If the system is vector-based and allows for scaling, that would be great. I would like to be able to rotate the model in real time with simple mouse dragging, without too much lag or other issues.
I have very limited time (in fact, I am already very behind). That is why I wanted to ask the experts here, so that I don't waste my time on something that will not work in the end. I have been using Rhino and Grasshopper to generate my model, but I don't think they are the right tools for this purpose. I was thinking that Processing might be able to handle this, but I don't have any experience with it. Another thing I would like is perhaps a 3D PDF file for distribution, but I'm not sure whether that is feasible.
Any insight or guidance is greatly appreciated.
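For what it's worth, the mapping from the data described above (a location plus three unit axes per block) to a rigid transform is direct. Here is a sketch in TypeScript with three.js - an assumed viewer, used here purely for illustration; the same mapping applies in Processing or any other engine:

    import * as THREE from "three";

    // Per-frame state of one block, as the analysis writes it out.
    interface BlockState {
      position: THREE.Vector3;
      axes: [THREE.Vector3, THREE.Vector3, THREE.Vector3]; // unit vectors
    }

    function applyState(mesh: THREE.Mesh, s: BlockState): void {
      const m = new THREE.Matrix4();
      // The columns of the rotation matrix are the block's three axes.
      m.makeBasis(s.axes[0], s.axes[1], s.axes[2]);
      m.setPosition(s.position);
      mesh.matrixAutoUpdate = false; // we drive the matrix directly
      mesh.matrix.copy(m);
    }

At roughly 180 blocks and 4320 vertices, updating every block each frame is trivially real-time, and mouse-drag orbiting comes essentially for free in any such viewer.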
Don't let the name fool you: BluffTitler DX9, a commercial package, may be what you're looking for.
Its simple interface provides a fast learning curve, with many quick tutorials to either watch or dissect. Depending on how fast your GPU is, real-time previews are scalable.
Reference:
Model Layer Page
User Submitted Gallery (3D models)
Jim Merry from tetra4D here. We make the 3D CAD conversion tools for Acrobat X to generate 3D PDFs. Acrobat has a 3D JavaScript API that enables you to manipulate objects, i.e., you could drive translations, rotations, etc. of objects from your animation information after translating your model to 3D PDF. I'm not sure I would recommend this approach if you are in a hurry, however. Also, I don't think there are any commercial 3D PDF generation tools for the formats you are using (Rhino, Grasshopper, Processing).
If you are trying to animate geometric deformations, 3D PDF won't really help you at all. You could capture the animation, encode it as Flash video, and embed it in a PDF, but this is a function of the multimedia tool in Acrobat Pro, i.e., it is not specific to 3D.

HTML5 Canvas Performance: Loading Images vs Drawing

I'm planning on writing a game using JavaScript / canvas, and I just had one question: what performance considerations should I think about regarding loading images versus just drawing with canvas methods? Because my game will use very simple geometry for the art (circles, squares, lines), either method will be easy to use. I also plan to implement a simple particle engine in the game, so I want to be able to draw lots of small objects without much of a performance hit.
Thoughts?
If you're drawing simple shapes with solid fills then drawing them procedurally is the best method for you.
If you're drawing more detailed entities with strokes, gradient fills, and other performance-sensitive make-up, you'd be better off using image sprites. Generating graphics procedurally is not always efficient.
It is possible to get away with a mix of both. Draw graphical entities procedurally on the canvas once as your application starts up. After that you can reuse the same sprites by painting copies of them instead of generating the same drop-shadow, gradient and strokes repeatedly.
If you do choose to draw sprites you should read some of the tips and optimization techniques on this thread.
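A minimal sketch of that draw-once-then-stamp approach in TypeScript (the 32-pixel glow sprite is an arbitrary example):

    // Pre-render an expensive shape once to an offscreen canvas.
    const sprite = document.createElement("canvas");
    sprite.width = sprite.height = 32;
    const sctx = sprite.getContext("2d")!;
    const glow = sctx.createRadialGradient(16, 16, 2, 16, 16, 16);
    glow.addColorStop(0, "white");
    glow.addColorStop(1, "rgba(255,255,255,0)");
    sctx.fillStyle = glow;
    sctx.fillRect(0, 0, 32, 32);

    // Per frame: cheap bitmap copies instead of re-tracing the gradient,
    // e.g. for a particle system.
    function drawParticles(
      ctx: CanvasRenderingContext2D,
      particles: { x: number; y: number }[]
    ): void {
      for (const p of particles) ctx.drawImage(sprite, p.x - 16, p.y - 16);
    }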
My personal suggestion is to just draw shapes. I've learned that if you're going to use images instead, then the more you use the slower things get, and the more likely you'll end up needing to do off-screen rendering.
This article discusses the subject and has several tests to benchmark the differences.
Conclusions
In brief: canvas likes a small canvas size, and the DOM likes working with few elements (although the DOM in Firefox is so slow that this isn't always true).
And if you are planning to use particles, you might want to take a look at Doodle-js.
Loading an image from the cache is faster than generating it or loading it from the original resource. But then you have to preload the images so they get into the cache.
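A minimal preloading sketch in TypeScript (resolves once every image is decoded and sitting in the cache):

    function preload(urls: string[]): Promise<HTMLImageElement[]> {
      return Promise.all(
        urls.map(
          (url) =>
            new Promise<HTMLImageElement>((resolve, reject) => {
              const img = new Image();
              img.onload = () => resolve(img);
              img.onerror = reject;
              img.src = url; // starts the request; the result is cached
            })
        )
      );
    }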
It really depends on the type of graphics you'll use, so I suggest you implement the easiest solution and solve the performance problems as they appear.
Generally I would expect copying a bitmap (drawing an image) to become faster than recreating it from primitives as the complexity of the image grows.
That is, drawing a couple of squares per scene should take about the same time with either method, but a complex image will be faster to copy from a bitmap.
As with most gaming considerations, you may want to look at what you need to do, and use a mixture of both.
For example, if you are using a background image, then loading the bitmap makes sense, especially if you will crop it to fit the canvas; but if you are making something dynamic, then you will need to use the drawing API.
If you target IE9 and FF4, for example, then on Windows you should get good drawing performance, since they take advantage of the graphics card; but for more general browsers you will perhaps want to look at using sprites, which will either be images you draw during initialization and then move, or bitmapped images you load.
It would help to know what type of game you are looking at, how dynamic the graphics will need to be, how large the bitmapped images would be, and what type of framerate you are hoping for.
The landscape is changing with each browser release. I suggest following the HTML5 Games initiative that Facebook has started, and the jsGameBench test suite. They cover a wide range of approaches from Canvas to DOM to CSS transforms, and their performance pros and cons.
http://developers.facebook.com/blog/post/454
http://developers.facebook.com/blog/archive
https://github.com/facebook/jsgamebench
If you are just drawing simple geometric objects, you can also use divs. They can be circles, squares, and lines with a few lines of CSS, you can position them wherever you want, and almost all browsers support the styles (you may have some problems with mobile devices using Opera Mini or old Android Browser versions and, of course, with IE7 and below), but there would be almost no performance hit.
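A sketch of that div approach in TypeScript (creating the elements from script; names and sizes are arbitrary):

    // A "circle" as an absolutely positioned div - no canvas involved.
    function makeCircle(
      x: number, y: number, r: number, color: string
    ): HTMLDivElement {
      const el = document.createElement("div");
      Object.assign(el.style, {
        position: "absolute",
        left: `${x - r}px`,
        top: `${y - r}px`,
        width: `${2 * r}px`,
        height: `${2 * r}px`,
        borderRadius: "50%", // turns the square div into a circle
        background: color,
      });
      document.body.appendChild(el);
      return el;
    }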
