Add animation to a head and arm mesh acquired from a 3D scanner (Kinect)

I used ReconstructMe to scan the upper half of my body (arm and head). The result is a 3D mesh, which I opened in 3ds Max. What I need to do now is add animation/motion to the 3D arm and head.
I think ReconstructMe created a mesh. Do I need to convert that mesh to a 3D object before adding animation? If so, how do I do it?
Do I need to separate the head and arm to add different animations to them? If so, how?
I am a beginner in 3ds Max. I am using 3ds Max 2012, Student Edition.

Typically you would set up bones, link the mesh to the bones with a Skin or Physique modifier, and then animate the bones as needed.
You can have one mesh or separate meshes, depending on your needs.

For setting up the rigging, it would be good to follow a tutorial like this one:
http://www.digitaltutors.com/11/training.php?pid=332
I find Digital Tutors concise and detailed enough for anybody to grasp the concepts if you're patient enough. Depending on the motion you want, some parts of the rig will require FK (forward kinematics), IK (inverse kinematics), or a mixture of both, in areas like the elbows of the arms.
Certain other parts of the character could also benefit from CAT controls. Throughout the whole rigging process, the most important principle to maintain is the hierarchy: parenting and linking the controls correctly.
Your mesh's topology also needs to be correct. When scanning from an outside source you will get either (a) a lot of triangles or (b) bad edge flow, so before rigging, take the time to get your scan's topology into the state it should be in.

Related

Custom renderer and lights in Threejs for wireless raytracing?

I have been using Three.js for a couple of weeks now and am really blown away by the power of this library!
Now, I would like to leverage it in order to run raytracing simulations for lower frequencies than light (RF). I figure it should be possible to alter or create new lights that would represent the sources of RF waves and then write a specific renderer that would take into account the interference aspects that apply in the RF context.
Is that the right approach? If so what functions/libraries should I focus on? Maybe all the raytracing/raycasting is already done and can be reused with little modification?
Any help appreciated!
Thanks
M.
Yes; depending on the domain you are modelling, radiometric simulations can be done with ray tracing. Photometric renderers are radiometric renderers with the domain restricted to the visible spectrum. You can usually just swap photometric terms like 'lux' for radiometric ones like 'flux', which indicates that the domain being modelled can be outside the visible spectrum.
Bear in mind that, depending on what you wish to model, you need to assume the RF waves behave much like a ray/photon. Effects such as diffraction, or propagation through vector fields, require intelligence beyond standard path tracing; instead, look at the ray marching used in volumetric renderers. Volumetric ray tracing is more akin to finite element analysis, since we don't just calculate the intersection point of the ray but also consider its propagation through a medium (a vector field) and the effect this medium has on the resultant ray path.
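To make the ray-marching idea concrete, here is a minimal sketch in plain Java (not Three.js): a ray is stepped through a medium and attenuation is accumulated with a Beer-Lambert term. The attenuation field is a made-up placeholder; a real RF simulation would sample measured or modelled material properties, and would need extra handling for scattering, diffraction, interference, and so on.

public class RayMarchSketch {

    // Hypothetical medium: attenuation coefficient (1/m) at a point in space.
    static double attenuationAt(double x, double y, double z) {
        return (z > 0 && z < 2) ? 0.5 : 0.01;   // e.g. a 2 m thick lossy slab
    }

    // Marches a straight ray (dir assumed to be a unit vector) and returns the
    // fraction of power remaining after travelling maxDist metres.
    static double marchRay(double[] origin, double[] dir, double maxDist, double step) {
        double transmittance = 1.0;
        for (double t = 0; t < maxDist; t += step) {
            double x = origin[0] + dir[0] * t;
            double y = origin[1] + dir[1] * t;
            double z = origin[2] + dir[2] * t;
            // Beer-Lambert attenuation over one step through the medium.
            transmittance *= Math.exp(-attenuationAt(x, y, z) * step);
        }
        return transmittance;
    }

    public static void main(String[] args) {
        double remaining = marchRay(new double[]{0, 0, -1}, new double[]{0, 0, 1}, 5.0, 0.05);
        System.out.println("Fraction of power remaining: " + remaining);
    }
}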
P.S. Sorry, I can't help with the specifics of Three.js; I've never used it.

Is there a way to create simple animations "on the fly" in modern OpenGL?

I think this requires a bit of background information:
I have been modding Minecraft for a while now, but I always wanted to make my own game, so I started digging into the freshly released LWJGL3 to actually get things done. Yes, I know it's a bit low level and I should use an engine and such; indeed, I already tried some engines and they never quite matched what I want to do, so I decided to tackle the problem at its root.
So far, I kind of understand how to render meshes, move the "camera", etc., and I'm willing to take on the learning curve.
But at some point all the tutorials start explaining how to load models, create skeletal animations and so on, and I don't think I really want to go that way. A lot of things about working with Minecraft code were awful, but I liked how I could create models and animations from Java code. Sure, it did not look super realistic, but since I'm not great with Blender either, I doubt having "classic" models and animations would help. Anyway, in that code I could rotate a box around to make a creature look at a player, and I could use a sine function to move legs and arms (or wings, in my case). That worked because Minecraft used immediate mode, and Java could directly tell the graphics card where to draw each vertex.
So, the actual question(s): is there any good way to make dynamic animations in modern (3.3+) OpenGL? My models would basically be a hierarchy of shapes (boxes or whatever), and I want to be able to rotate them on the fly. But I'm not sure how to organize that. Would I store all the translation/rotation matrices for each sub-shape? Would that put a hard limit on the number of sub-shapes a model could have? Has anyone tried something like that?
Edit: For clarification, what I did looked something like this:
Create a model: https://github.com/TheOnlySilverClaw/Birdmod/blob/master/src/main/java/silverclaw/birds/client/model/ModelOstrich.java
The model is created as a bunch of boxes in the constructor; the render and setRotationAngles methods set the scale and rotations.
You should follow an OpenGL tutorial in order to understand the basics.
Let me suggest "Learning Modern 3D Graphics Programming", and especially this chapter, where you move a robot arm with multiple joints.
I did a port in Java using JOGL here, but you can easily port it over to LWJGL.
What you are looking for is exactly skeletal animation, the only difference being that you do not want to load animations for your bones but want to compute/generate the transforms on the fly.
You basically have a hierarchy of bones with geometry attached to it. It sounds like you want to manipulate this geometry "rigidly", so before sending your meshes/transforms to the GPU (the classic way), you start by computing the new transforms in model or world space, then send those freshly computed matrices and draw your geometry on the GPU the standard way.
As Sorin said, to compute each transform you simply iterate over your hierarchy and accumulate transforms, combining the parent bone's transform with your local transform relative to the parent.
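As a minimal sketch of that accumulation (plain Java, 4x4 matrices stored column-major in a double[16]; the Bone class and helpers here are made up for illustration, not taken from any library):

public class Bone {
    double[] localTransform = identity();   // transform relative to the parent bone
    double[] worldTransform = identity();   // accumulated transform, uploaded as the model matrix
    java.util.List<Bone> children = new java.util.ArrayList<>();

    // Walk the hierarchy and combine each bone's local transform with its parent's.
    void updateWorldTransforms(double[] parentWorld) {
        worldTransform = multiply(parentWorld, localTransform);
        for (Bone child : children) {
            child.updateWorldTransforms(worldTransform);
        }
    }

    static double[] identity() {
        double[] m = new double[16];
        m[0] = m[5] = m[10] = m[15] = 1.0;
        return m;
    }

    // Column-major 4x4 multiply: result = a * b.
    static double[] multiply(double[] a, double[] b) {
        double[] r = new double[16];
        for (int col = 0; col < 4; col++)
            for (int row = 0; row < 4; row++)
                for (int k = 0; k < 4; k++)
                    r[col * 4 + row] += a[k * 4 + row] * b[col * 4 + k];
        return r;
    }
}

Each frame you set the local transforms (for example from a sine function, as before), call updateWorldTransforms(identity()) on the root bone, and upload each bone's worldTransform as the model matrix when drawing its box.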
Yes and no.
You can have your hierarchy of shapes and store a relative transform for each.
For example the "player" whould have a translation to 100,100, 10 (where the player is), and then the "head" subcomponent would have an additional translation of 0,0,5 (just a bit higher on the z axis).
You can store these as matrices (they can encode translation, roation and scaling) and use glPushMatrix and glPop matrix to add and remove a matrix to a stack maintained by openGL.
The draw() function (or whatever you call it) should look something like this:
glPushMatrix();
glMultMatrixf(my_transform);  // or just glTranslatef, glRotatef, glScalef, etc.
// draw my mesh here
for (child : children) { child.draw(); }  // children inherit the accumulated transform
glPopMatrix();
This gives you a hierarchical setup in which objects move with their parent. Alternatively, you can keep a stack in main memory and do the multiplications yourself (use a library). I think the OpenGL stack may have a limit (implementation dependent), but if you handle it yourself the only limit is the amount of RAM you can use. Once all the matrices are multiplied, rendering takes the same amount of time; it doesn't matter for performance how deep a mesh is in the hierarchy.
For actual animations you need to compute the intermediate transformations. For example, for a crouch animation you probably want a few frames in between so that the camera doesn't just jump to the low position. You can do this with a time-based linear interpolation between the start and end positions (see the sketch below), but this only covers simple animations and you still have to implement it yourself.
Anything more complicated (e.g. modifying the mesh based on the bone links) you would need to implement yourself.
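For what it's worth, a minimal sketch of that kind of time-based interpolation (plain Java; the start/end heights and the 0.3 s duration are just made-up example values):

// Linear interpolation between two values; t runs from 0 to 1.
static double lerp(double start, double end, double t) {
    return start + (end - start) * t;
}

// Example: ease a "head" height from 5.0 down to 2.5 over 0.3 seconds for a crouch.
static double headHeightAt(double secondsSinceStart) {
    double t = Math.min(secondsSinceStart / 0.3, 1.0);  // clamp progress to [0, 1]
    return lerp(5.0, 2.5, t);
}

Each frame you evaluate this with the elapsed time and write the result into the corresponding translation matrix before drawing.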

Make a mesh unprintable, but still viewable with three.js

Is there a way to make a mesh unprintable with a 3D printer, but still viewable with three.js?
The motivation is that I want to show users a preview of a mesh before they buy it. But as the JS code is viewable, they could download the mesh without paying for it. Degrading the quality of the preview mesh would be one way, but as the quality of the mesh is a selling point, I would like to avoid that.
My idea was to add some kind of triangulation defects that would prevent the mesh from being printed, but would not prevent three.js from showing it.
Tools like Netfabb or Meshlab should also not be able to automatically repair the mesh.
Is there something like a bad sector copy protection equivalent for 3d models?
Just a few ideas:
1) Augment your shaders to ignore some interval of vertices from the buffer (every 3rd one, or something like that). That way you can add "garbage" to the model file so it cannot be lifted easily from the network (see the sketch after this list).
2) Once in the buffer, the data can still be pulled out by a savvy user, unless you split the model into many chunks and render them out of order, or only render the front half of the model, making it less useful for 3D printing. You could also render in split views, or use stereoscopic interlacing with a separation of zero.
3) Only render a non-symmetrical half of your model, with the camera controls locked to that half :P
Kinda wonky, a ton of work to implement, and someone will still find a way, I'm sure. But that's my two cents' worth anyway; hope it helps.
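A rough sketch of idea 1, as a variant that uses an index buffer rather than shader logic to skip the garbage (plain Java; every name here is made up for illustration):

import java.util.Random;

public class VertexObfuscation {

    // After every two real vertices, insert one garbage vertex, so every 3rd slot is junk.
    // The padded array is what you ship over the network; 'indices' (length = real vertex
    // count) receives the slot of each real vertex so your renderer can skip the junk.
    static float[] obfuscate(float[] real /* x,y,z triples */, int[] indices, long seed) {
        int vertexCount = real.length / 3;
        float[] padded = new float[(vertexCount + vertexCount / 2) * 3];
        Random rng = new Random(seed);
        int slot = 0;
        for (int v = 0; v < vertexCount; v++) {
            System.arraycopy(real, v * 3, padded, slot * 3, 3);  // copy the real vertex
            indices[v] = slot++;                                 // remember where it went
            if (v % 2 == 1) {
                padded[slot * 3]     = rng.nextFloat() * 100 - 50;  // garbage coordinates
                padded[slot * 3 + 1] = rng.nextFloat() * 100 - 50;
                padded[slot * 3 + 2] = rng.nextFloat() * 100 - 50;
                slot++;
            }
        }
        return padded;
    }
}

The original triangle index buffer is then remapped through indices[] before upload, so the renderer never touches the garbage slots, while anything that naively reads the raw vertex array gets noise.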
I've seen some online shops show previews using renders taken every 10-30 degrees around the model. That way you only send the resulting images, not the model.
Why not show a detailed HD video of your model?
If the mesh is non-manifold it will not print.
a) Render server-side and stream the results as an interactive video.
b) Destroy the mesh while keeping the normals intact for shading. You can randomly flip faces and render double-sided. You can "extrude" edges to mess up the topology. As long as you map the normals correctly, it will shade without any of these defects being visible.
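A minimal sketch of the face-flipping part of (b), assuming an indexed triangle mesh whose per-vertex normals are stored separately (the names are illustrative):

import java.util.Random;

public class MeshMangler {

    // Reverse the winding order of a random subset of triangles. The stored per-vertex
    // normals are left untouched, so a double-sided material still shades the preview
    // correctly, while the inconsistent face orientation trips up printing pipelines.
    static void flipRandomFaces(int[] triangleIndices, double fraction, long seed) {
        Random rng = new Random(seed);
        for (int t = 0; t < triangleIndices.length; t += 3) {
            if (rng.nextDouble() < fraction) {
                int tmp = triangleIndices[t + 1];
                triangleIndices[t + 1] = triangleIndices[t + 2];  // swapping two indices
                triangleIndices[t + 2] = tmp;                     // reverses the winding
            }
        }
    }
}

Whether this survives automatic repair in tools like Netfabb or Meshlab is another question; reorienting faces is one of the easier fixes, so it is probably best combined with the other suggestions.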

Transform a set of 2d images representing all dimensions of an object into a 3d model

Given a set of 2D images that cover all sides of an object (e.g. a car and its roof/sides/front/rear), how could I transform this into a 3D object?
Are there any libraries that could do this?
Thanks
These "2D images" are usually called "textures". You probably want a 3D library which allows you to specify a 3D model with bitmap textures. The library would depend on platform you are using, but start with looking at OpenGL!
OpenGL for PHP
OpenGL for Java
... etc.
I've heard of the program "Poser" doing this using heuristics for human forms, but otherwise I don't believe this is theoretically possible. You are asking to construct volumetric data from flat data (inferring the third dimension).
I think you'd have to make a ton of assumptions about your geometry, and even then you'd only really have a shell of the object. If you did this well, you'd have a contiguous surface representing the boundary of the object, not a volumetric object itself.
What you can do, as Tomas suggested, is slap these 2D images onto something as textures. However, you will still need to construct a triangle mesh surface, and actually do all the modeling, for this to present a 3D surface.
I hope this helps.
The tools that can currently do anything close to what you are asking for automagically are extremely proprietary. No libraries, but there are some products.
The core issue is matching corresponding points in the images and being able to say: this spot in image A is this spot in image B, and they both match this spot in image C, and so on.
There are three ways to go about this: manual matching (you have the photos and have to use your own brain to find the corresponding points), coded targets, and texture matching.
PhotoModeller (www.photomodeller.com, US$1,145.00) supports manual matching and coded targets. You print out a bunch of target images, attach them to your object, shoot your photos, and the software finds the targets in each picture and creates a 3D object based on those points.
PhotoModeller Scanner (US$2,595.00) adds texture matching: tiny bits of the images are compared to see if they represent the same source area.
Both PhotoModeller products depend on shooting the images with a calibrated camera: you use a consistent focal length for every shot and go through a calibration process to map the lens distortion of the camera.
If you can do manual matching, the Match Photo feature of Google SketchUp may do the job, and SketchUp is free. If you can shoot new photos, you can add your own targets, like colored sticker dots, to the object to help you generate contours.
If your images are drawings, like a profile, plan view, etc., PhotoModeller will not help you, but SketchUp may be just the tool you need. You will have to build up each part manually, because you will have to supply the intelligence to recognize which lines and points correspond from drawing to drawing.
I hope this helps.

3D model construction using multiple images from multiple points (Kinect)

Is it possible to construct a 3D model of a still object if various images, along with depth data, are gathered from various angles? What I was thinking of was a sort of circular conveyor belt on which a Kinect would be placed, while the real object to be reconstructed in 3D sits in the middle. The conveyor belt then rotates around the object in a circle and lots of images are captured (perhaps 10 images per second), which would allow the Kinect to capture an image, including depth data, from every angle. Theoretically this seems possible. The model would also have to be recreated with its textures.
What I would like to know is whether there are any similar projects/software already available (any links would be appreciated),
whether this is feasible within, say, 6 months,
and how I would proceed, e.g. any similar algorithms you could point me to.
Thanks,
MilindaD
It is definitely possible, and there are a lot of 3D scanners out there that work on more or less the same principle of stereoscopy.
You probably know this, but just for context: the idea is to get two images of the same point, taken from different positions, and to use triangulation to compute its 3D coordinates in your scene. Although this is quite easy, the big issue is finding the correspondence between the points in your two images, and this is where you need good software to extract and recognize similar points.
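A minimal sketch of the triangulation step itself (plain Java, midpoint method for two viewing rays; the camera centers and ray directions are assumed to come from your calibration and matching, and all names are illustrative):

public class Triangulate {

    // Given two rays (camera center c + unit direction d) that should pass through the
    // same scene point, return the midpoint of the shortest segment between them.
    static double[] triangulate(double[] c1, double[] d1, double[] c2, double[] d2) {
        double[] w0 = sub(c1, c2);
        double a = dot(d1, d1), b = dot(d1, d2), c = dot(d2, d2);
        double d = dot(d1, w0), e = dot(d2, w0);
        double denom = a * c - b * b;            // near zero means the rays are parallel
        double t1 = (b * e - c * d) / denom;     // parameter along ray 1
        double t2 = (a * e - b * d) / denom;     // parameter along ray 2
        double[] p1 = add(c1, scale(d1, t1));
        double[] p2 = add(c2, scale(d2, t2));
        return scale(add(p1, p2), 0.5);          // best estimate of the 3D point
    }

    static double dot(double[] u, double[] v) { return u[0]*v[0] + u[1]*v[1] + u[2]*v[2]; }
    static double[] sub(double[] u, double[] v) { return new double[]{u[0]-v[0], u[1]-v[1], u[2]-v[2]}; }
    static double[] add(double[] u, double[] v) { return new double[]{u[0]+v[0], u[1]+v[1], u[2]+v[2]}; }
    static double[] scale(double[] u, double s) { return new double[]{u[0]*s, u[1]*s, u[2]*s}; }
}

In practice the ray directions come from back-projecting the matched pixel coordinates through each camera's intrinsics and pose, and finding those matches reliably is the hard part.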
There is an open-source project called Meshlab for 3D vision, which includes 3D reconstruction algorithms. I don't know the details of the algorithms, but the software is definitely a good entry point if you want to play with 3D.
I used to know some other ones; I will try to find them and add them here:
Insight3d
Check out https://bitbucket.org/tobin/kinect-point-cloud-demo/overview, which is a code sample for the Kinect for Windows SDK that does specifically this. Currently it uses the bitmaps captured by the depth sensor and iterates through the byte array to create a point cloud in PLY format that can be read by MeshLab. The next stage for us is to apply/refine a Delaunay triangulation algorithm to form a mesh instead of points, to which a texture can be applied. A third stage would then be a mesh-merging step to combine multiple captures from the Kinect into a full 3D object mesh.
This is based on some work I did in June using the Kinect for the purposes of 3D printing capture.
The .NET code in this source code repository will, however, get you started with what you want to achieve.
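The linked repository is .NET, but here is a rough sketch of the depth-frame-to-PLY step it describes (in Java; the intrinsics below are placeholder values, real ones should come from calibration):

import java.io.IOException;
import java.io.PrintWriter;
import java.util.ArrayList;
import java.util.List;

public class DepthToPly {

    // Placeholder intrinsics roughly in the range of a Kinect depth camera.
    static final double FX = 570.0, FY = 570.0, CX = 320.0, CY = 240.0;

    // Convert a depth frame (millimetres, row-major, width x height) into 3D points.
    static List<double[]> toPointCloud(int[] depthMm, int width, int height) {
        List<double[]> points = new ArrayList<>();
        for (int v = 0; v < height; v++) {
            for (int u = 0; u < width; u++) {
                int z = depthMm[v * width + u];
                if (z == 0) continue;            // 0 means "no reading" on the Kinect
                double zm = z / 1000.0;          // convert to metres
                double x = (u - CX) * zm / FX;   // pinhole back-projection
                double y = (v - CY) * zm / FY;
                points.add(new double[]{x, y, zm});
            }
        }
        return points;
    }

    // Write the points as an ASCII PLY file that MeshLab can open.
    static void writePly(List<double[]> points, String path) throws IOException {
        try (PrintWriter out = new PrintWriter(path)) {
            out.println("ply");
            out.println("format ascii 1.0");
            out.println("element vertex " + points.size());
            out.println("property float x");
            out.println("property float y");
            out.println("property float z");
            out.println("end_header");
            for (double[] p : points) {
                out.println(p[0] + " " + p[1] + " " + p[2]);
            }
        }
    }
}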
Autodesk has a piece of software that will do what you are asking for; it is called "Photofly" and is currently in the Labs section. Using a series of images taken from multiple angles, the 3D geometry is created and then photo-mapped with your images to create the scene.
If you are more interested in the theoretical part of this problem (i.e. how it works),
here is a document from Microsoft Research about a moving depth camera and 3D reconstruction.
Try out VisualSfM (http://ccwu.me/vsfm/) by Changchang Wu (http://ccwu.me/)
It takes multiple images of the scene from different angles and outputs a 3D point cloud.
The algorithm is called "Structure from Motion".
Brief idea of the algorithm: it involves extracting feature points in each image, finding correspondences between them across images, building feature tracks, estimating the camera matrices, and thereby the 3D coordinates of the feature points.

Resources