Print a 3D object animation in 2D - unityscript

What I need to do is print the 3D animation transitions in my animation clip in a 2D format, for example printing a human walking animation on paper. Is there a way to do this?

There are several aspects to consider:
Splitting the animation up into frames
Rendering the frame
Retrieving the rendered frame
Splitting the animation up into frames
Before you render a frame, you have to decide which frame it should be. Depending on your requirements, you may want to render only keyframes (frames that have been defined by the animation artist or the mocap data, but which may not be evenly distributed over the duration of the animation), or to render frames that are evenly distributed over the duration of the animation (e.g. one frame every 0.2 seconds).
Once you know the frame, you can tell the animation controller to jump to that specific frame and update the 3D model accordingly.
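For the evenly distributed case, the sample times can be computed up front. Here is a minimal, language-neutral sketch (TypeScript; the clip length and the 0.2-second interval are illustrative values, not anything mandated by Unity):

// Sample times for one playthrough of a clip, one frame every `interval` seconds.
function sampleTimes(clipLength: number, interval: number): number[] {
  const count = Math.floor(clipLength / interval) + 1;
  const times: number[] = [];
  for (let i = 0; i < count; i++) {
    times.push(i * interval); // multiply instead of accumulating to limit float drift
  }
  return times;
}

// A 1.5-second walk cycle sampled every 0.2 seconds gives 8 sample times,
// roughly 0, 0.2, ..., 1.4 (subject to the usual floating-point rounding).
console.log(sampleTimes(1.5, 0.2));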
Rendering the frame
Again, depending on your requirements, you may want to consider using a Camera with an orthographic projection. This removes any perspective distortion, making it easier to visually analyze the animation. If, of course, the goal is a more visually appealing or "artistic" presentation, a perspective projection may be desirable instead.
Retrieving the rendered frame
As you want to print out the rendering, you need to retrieve its result. You can do this either by letting Unity take a screenshot (Application.CaptureScreenshot), or by creating a RenderTexture and setting it as the render target of your Camera. You can then copy the contents of the RenderTexture into a Texture2D (Texture2D.ReadPixels()) and save that to disk (Texture2D.EncodeToPNG()/EncodeToJPG()).

Related

Reusing parts of the previous frame when drawing 2D with WebGL

I'm using WebGL to do something very similar to image processing. I draw a single quad in orthographic projection and perform some processing in the fragment shaders. There are two steps: in the first, the original texture from the quad is processed in the fragment shader and written to a framebuffer; a second step processes that data into the final canvas.
The users can zoom and translate the image. For this to work smoothly, I need to hit 60 fps, or it gets noticeably sluggish. This is no issue on desktop GPUs, but on mobile devices with much weaker hardware and higher resolutions it gets problematic.
Translation is the most noticeable and problematic case: the user drags the mouse pointer or their finger over the screen, and the image lags behind. But translation is also a case where I could theoretically reuse a lot of data from the previous frame.
Ideally I'd copy the canvas from the last frame, translate it by x,y pixels, and then only do the whole image processing in fragment shaders on the parts of the canvas that aren't covered by the translated previous canvas.
Is there a way to do this in WebGL?
If you want to access the previous frame, you need to draw to a texture attached to a framebuffer. Then draw that texture into the canvas, translated.
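A minimal sketch of that setup in TypeScript, assuming a WebGL1 context; the two declared helpers are hypothetical stand-ins for the existing processing and quad-drawing passes (the strip uncovered by the translation would still need a processing pass of its own, e.g. with scissoring):

// Keep the finished frame in a texture attached to a framebuffer, so the next
// frame can cheaply re-draw it translated instead of re-running the filters.
function createFrameTarget(gl: WebGLRenderingContext, width: number, height: number) {
  const texture = gl.createTexture()!;
  gl.bindTexture(gl.TEXTURE_2D, texture);
  gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, width, height, 0,
                gl.RGBA, gl.UNSIGNED_BYTE, null);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.LINEAR);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);
  const framebuffer = gl.createFramebuffer()!;
  gl.bindFramebuffer(gl.FRAMEBUFFER, framebuffer);
  gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0,
                          gl.TEXTURE_2D, texture, 0);
  gl.bindFramebuffer(gl.FRAMEBUFFER, null);
  return { texture, framebuffer };
}

// Hypothetical stand-ins for the existing two-pass pipeline:
declare function runProcessingPasses(gl: WebGLRenderingContext): void; // expensive shader work
declare function drawTexturedQuad(gl: WebGLRenderingContext,
                                  tex: WebGLTexture, dx: number, dy: number): void; // cheap blit

const canvas = document.querySelector("canvas") as HTMLCanvasElement;
const gl = canvas.getContext("webgl") as WebGLRenderingContext;
const cached = createFrameTarget(gl, canvas.width, canvas.height);

function drawFrame(contentChanged: boolean, dx: number, dy: number): void {
  if (contentChanged) {
    // Re-run the expensive processing into the offscreen texture.
    gl.bindFramebuffer(gl.FRAMEBUFFER, cached.framebuffer);
    gl.viewport(0, 0, canvas.width, canvas.height);
    runProcessingPasses(gl);
  }
  // Every frame: draw the cached texture to the canvas, translated.
  gl.bindFramebuffer(gl.FRAMEBUFFER, null);
  gl.viewport(0, 0, canvas.width, canvas.height);
  drawTexturedQuad(gl, cached.texture, dx, dy);
}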

How to add motion blur (non-zero exposure time rendering) in Three.js?

I am trying to achieve this effect:
https://dl.dropboxusercontent.com/u/8554242/dmitri/projects/MotionBlurDemo/MotionBlurDemo.html
But I need it applied to my Three.js scene, specifically on a Point Cloud Material (particles) or the individual particles.
Any help greatly appreciated!
If you want the "physically correct" approach, then:
Create a FIFO of N images.
On each scene redraw (assuming constant fps):
If the FIFO is already full, throw out the oldest image.
Put the raw rendered scene image into the FIFO.
Blend all the images in the FIFO together.
Render the blended image to the screen.
If N is big, then to speed things up you can also store the cumulative blend image of all images inside the FIFO, adding each inserted image to it and subtracting each removed image from it. The target image must have enough color bits to hold the result; in that case you render the cumulative image divided by N.
For constant fps, the exposure time is t = N/fps. If you do not have constant fps, then you need a variable-size FIFO and to store the render time along with each image. If the sum of the render times of the images inside the FIFO exceeds the exposure time, throw the oldest image out...
This approach requires quite a lot of memory (the image FIFO) but does not need any additional processing. Most blur effects fake all this inside a geometry shader, or on the CPU, by blurring or rendering the moving objects differently, which affects performance and is sometimes a bit complicated to render.
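A sketch of that FIFO averaging with three.js render targets (TypeScript; N, the sizes, and the additive 1/N-weighted blend pass are illustrative assumptions, not the only way to weight the frames):

import * as THREE from "three";

// FIFO of the last N rendered frames, kept as render targets (a ring buffer).
const N = 8;                                  // exposure time = N / fps at constant fps
const width = window.innerWidth;
const height = window.innerHeight;
const targets: THREE.WebGLRenderTarget[] = [];
let head = 0;                                 // slot holding the oldest frame

// Full-screen blend pass: one quad per stored frame, each weighted 1/N.
const blendScene = new THREE.Scene();
const blendCamera = new THREE.OrthographicCamera(-1, 1, 1, -1, 0, 1);
for (let i = 0; i < N; i++) {
  const target = new THREE.WebGLRenderTarget(width, height);
  targets.push(target);
  const material = new THREE.MeshBasicMaterial({
    map: target.texture,
    transparent: true,
    opacity: 1 / N,                           // equal weight for every frame in the FIFO
    blending: THREE.AdditiveBlending,
    depthTest: false,
    depthWrite: false,
  });
  blendScene.add(new THREE.Mesh(new THREE.PlaneGeometry(2, 2), material));
}

function renderWithMotionBlur(renderer: THREE.WebGLRenderer,
                              scene: THREE.Scene,
                              camera: THREE.Camera): void {
  // "Throw out the oldest image, put the new one in": overwrite the oldest slot.
  renderer.setRenderTarget(targets[head]);
  renderer.render(scene, camera);
  head = (head + 1) % N;
  // Blend all N stored frames additively onto the canvas.
  renderer.setRenderTarget(null);
  renderer.render(blendScene, blendCamera);
}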

3D Max generate objects from animation frames, copy frames to viewport

Is it possible to copy all animation frames into separate objects in the viewport?
I am already using the Path Deformation and Array tools, but (as far as I know) they cannot animate materials. Also, their output cannot be edited with the Curve Editor?
Example:
I have a 30 frame animation of a rotating box moving along a path. Instead I would like 30 boxes in the viewport. Each one, a copy of its respective keyframe.
Sort of like the classic video technique of creating trails from moving objects. I know it can be done in After Effects, but I want actual 3D models from my own custom animation frames in the scene, not the results of Path Deformation and Array. Then I can work on them as a still image.
Select the object and use this MaxScript code:
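-- at each of frames 1 to 30, create a collapsed copy (snapshot) of the current selection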
for i = 1 to 30 do at time i snapshot $
It will create a collapsed copy of the mesh at each frame.

What are the pros and cons of a sprite sheet compared to an image sequence?

I come from a 2D animation background, so whenever I use an animated sequence I prefer a sequence of images. To me this makes a lot of sense, because you can easily export the image sequence from your compositing/editing software and easily define the aspect.
I am new to game development and am curious about the use of sprite sheets. What are the advantages and disadvantages? Is file size an issue? To me it would seem that a bunch of small images would be the same as one massive one. Also, defining each individual area of the sprites seems cumbersome.
Basically, I don't get why you would use a sprite sheet - please enlighten me.
Thanks
Performance is better for sprite sheets because you have all your data contained in a single texture. Let's say you have 1000 sprites playing the same animation from a sprite sheet. The process for drawing would go something like:
Set the sprite sheet texture.
Adjust UVs to show a single frame of the animation.
Draw sprite 0
Adjust UVs
Draw sprite 1
.
.
.
Adjust UVs
Draw sprite 998
Adjust UVs
Draw sprite 999
Using a texture sequence could result in a worst case of:
Set the animation texture.
Draw sprite 0
Set the new animation texture.
Draw sprite 1
.
.
.
Set the new animation texture.
Draw sprite 998
Set the new animation texture.
Draw sprite 999
Gah! Before drawing each sprite, you would have to change the render state to use a different texture, and this is much slower than adjusting a couple of UVs.
Many (most?) graphics cards require power-of-two, square dimensions for images, for example 128x128, 512x512, etc. Many, if not most, sprites do not have such dimensions, however. You then have two options:
Round the sprite image up to the nearest power-of-two square. A 16x32 sprite becomes twice as large, padded with transparent pixels to 32x32. (This is very wasteful.)
Pack multiple sprites into one image. Rather than padding with transparency, why not pad with other images? Pack in those images as efficiently as possible! Then just render segments of the image, which is totally valid.
Obviously the second choice is much better, with less wasted space. So if you must pack several sprites into one image, why not pack them all in the form of a sprite sheet?
So to summarize, image files when loaded into the graphics card must be power-of-two and square. However, the program can choose to render an arbitrary rectangle of that texture to the screen; it doesn't have to be power-of-two or square. So, pack the texture with multiple images to make the most efficient use of texture space.
Sprite sheets tend to be smaller files (since there's only one header for the whole lot).
Sprite sheets load quicker, as there's just one disk access rather than several.
You can easily view or adjust multiple frames at once.
Less wasted video memory when you load the whole lot into one surface (as Ricket has said).
Individual sprites can be delineated by offsets (e.g. on an implicit grid - no need to explicitly mark or note each sprite's position; see the sketch below).
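That implicit-grid lookup is simple to write. A TypeScript sketch (the sheet and frame sizes are illustrative assumptions):

// Compute normalized UVs for frame `index` on an implicit grid,
// given the sheet and frame sizes in pixels.
interface UVRect { u0: number; v0: number; u1: number; v1: number; }

function frameUV(index: number,
                 sheetWidth: number, sheetHeight: number,
                 frameWidth: number, frameHeight: number): UVRect {
  const cols = Math.floor(sheetWidth / frameWidth);   // frames per row
  const col = index % cols;
  const row = Math.floor(index / cols);
  return {
    u0: (col * frameWidth) / sheetWidth,
    v0: (row * frameHeight) / sheetHeight,
    u1: ((col + 1) * frameWidth) / sheetWidth,
    v1: ((row + 1) * frameHeight) / sheetHeight,
  };
}

// Frame 5 of a 512x512 sheet holding 64x64 sprites: column 5, row 0.
const uv = frameUV(5, 512, 512, 64, 64);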
There isn't one massive benefit to using sprite sheets, but there are several small ones. The practice also dates back to a time before most people were using proper 2D graphics software to make game graphics, so the artist workflow wasn't necessarily the most important consideration back then.

use core-image in 3d

I have a working Core Video setup (a frame captured from a USB camera via QTKit), and the current frame is rendered as a texture on an arbitrary plane in 3D space in a subclassed NSOpenGLView. So far so good, but I would like to use some Core Image filters on this frame.
I now have the basic code setup, and it renders my unprocessed video frame as before, but the final processed output CIImage is rendered as a screen-aligned quad into the view. It feels like an image blitted over my 3D rendering. This is what I do not want!
I am looking for a way to process my video frame (a CVOpenGLTextureRef) with Core Image and just render the resulting image on my plane in 3D.
Do I have to use offscreen rendering (store the viewport, set new viewport, modelview and perspective matrices, and render into an FBO), or is there an easier way?
Thanks in advance!
Try GPUImage! It's easy to use, and faster than Core Image processing. It uses predefined or custom GLSL shaders.
