THREE.js - GLTF morph anims with skeletal anims playback problems

I've recently converted a bunch of JSON models to GLTF for playback in Three.js (exported from Blender 2.79b).
The models consist of multiple bone anims (idle, walk, run, attack, etc.) which I can access, playback, and fade between without problem.
They also contain shape keys with facial morphs for expressions and speech - 33 exist per model on average.
I have been unsuccessful at getting any of the facial morphs to play back properly.
When I keyframe a single shape key in Blender (for example, facial morph number 31 of 32 on the Dope Sheet), animate it over time, export, and play it back in the browser, it defaults to animating whatever the first shape key in the list is. So I feel like I'm close: a facial morph is animating, just not the right one.
[Screenshot: shape-key channels on the Blender Dope Sheet]
I have been struggling to figure out how to access the proper morph. Coding is obviously not my greatest strength, but I can usually work off an example if I can find one doing something similar; in this case, something that demonstrates how to access morph animation clips and play them back alongside bone animations.
Any suggestions would be most appreciated.
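One thing that often bites here: GLTFLoader exposes each mesh's shape keys as mesh.morphTargetInfluences, with a name-to-index map in mesh.morphTargetDictionary (when the exporter preserves the names). Driving influences[0] animates the first shape key, which matches the symptom above. A minimal sketch; the setMorphByName helper is illustrative, not a three.js API:

```javascript
// Set a single shape key by its Blender name instead of assuming index 0.
// `mesh` is any mesh with morph targets, as produced by GLTFLoader.
function setMorphByName(mesh, name, weight) {
  const index = mesh.morphTargetDictionary[name];
  if (index === undefined) {
    throw new Error('No morph target named "' + name + '"');
  }
  mesh.morphTargetInfluences[index] = weight; // weight is 0..1
  return index;
}
```

For example, setMorphByName(faceMesh, 'Jaw_Open', 1.0), where 'Jaw_Open' is a hypothetical key name; log mesh.morphTargetDictionary in the console to see which names (if any) the Blender 2.79 exporter actually wrote out. If the dictionary is missing, the exporter dropped the names and only the indices survive.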

Related

GLTF anim and morph playback issue when using three.js with more than one mixer

I have a single gltf file, exported from Blender with 6 anims and 20 morph targets. When that's the only skinned gltf object in the scene, everything plays nicely - I can switch between bone anims (run, walk, idle, etc), and get all morph anims (for facial expressions) cycling on a timer, or triggered by events. Yay.
The problem is when I introduce a second skinned object, such as an NPC. At that point lots of weirdness starts to happen.
For example, when morph targets cycle expressions on/off on the player object, the NPC model standing nearby scales down and disappears on the off cycle, then scales back up during the on cycle. Another example: at init time the NPC object might randomly turn into an instance of another loaded object (a tree or a building), or occasionally a mini version of some random object at 10% normal scale, and then start rapidly bouncing around in unpredictable and inconsistent ways. I have no idea what's going on.
I thought it might have something to do with loading multiple mixers, but then that's what the docs state should be done - "When multiple objects in the scene are animated independently, one AnimationMixer may be used for each object." Unless I'm reading that wrong?
I'm using:
npcMixer = new THREE.AnimationMixer(npc);
virtually the same as what I do for the player:
playerMixer = new THREE.AnimationMixer(player);
Is this a bad/mistaken approach?
Perhaps worth noting: I had FBX versions of the player and NPC working just fine together when exported and accessed as individual files. Then I spent a lot of time converting to GLTF, since it's faster and lets me wrap all the actions up in a single file, which the FBX exporter does not seem to support. (If I'm wrong about FBX being able to export multiple actions in a single file for playback in the three.js context, please let me know!)
Three.js r98
Blender 2.79
Thanks for any advice.
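For what it's worth, the one-mixer-per-object approach matches the docs. A common cause of exactly this kind of cross-talk is building both characters from the same loaded glTF scene without rebinding the skeletons: plain Object3D.clone() leaves both copies bound to the same bones, so one character's animation warps the other. A sketch, assuming three.js with the examples' SkeletonUtils available; the setupCharacters helper and its parameters are illustrative, not part of any API:

```javascript
// Build two independently animated characters from one loaded glTF.
// `THREE` is the three.js namespace (with SkeletonUtils attached),
// `gltf` is the GLTFLoader result, `scene` is the target scene.
function setupCharacters(THREE, gltf, scene) {
  // SkeletonUtils.clone rebinds each clone's SkinnedMesh to its own
  // fresh bone hierarchy; plain .clone() would share one skeleton.
  const player = THREE.SkeletonUtils.clone(gltf.scene);
  const npc = THREE.SkeletonUtils.clone(gltf.scene);
  scene.add(player);
  scene.add(npc);

  // One mixer per animated object, as the docs recommend.
  const playerMixer = new THREE.AnimationMixer(player);
  const npcMixer = new THREE.AnimationMixer(npc);
  return { player, npc, playerMixer, npcMixer };
}
```

If the player and NPC come from two different files, the same principle applies: each mixer's root must own its own skeleton and morph-target buffers, or updates to one will bleed into the other.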

Inward facing 360 degree photo

Problem statement:
I want to grab a smartphone, take a series of photos (or a video) of an object, and convert it to a 360 degree photo.
Some Research:
If we look at Facebook 360 Photos, this is exactly what I'm looking for, except that Facebook's solution is outward-facing 360 photos, and I'm looking for inward facing 360 photos.
This objective seems to be similar to 360 degree product photography. Important difference: I do not want to use any special equipment other than a smartphone. Just like you can create a 360 degree outward facing photo without needing a tripod or a turntable.
I want to understand from the community:
Does a solution like this exist? What's the best we can do at the moment?
What kind of technological expertise would a person require to create something like this? Consider yourself an investor or a CEO who needs to get this built. Who do you hire? Who do you consult?
Thanks a lot for the help.
There is a fundamental difference between the two cases:
"Outward": the translation of the camera is small. If the scene is far enough away, it can be ignored, and the camera motion can be approximated by a rotation about its focal point (there is almost no parallax between views). The mapping from one image to any other image is well approximated by a homography, and the image set maps naturally to the inner surface of a sphere (or, approximately, a cylinder, a cube, etc.). A scene far away will also appear to move slowly, so capture time is less of a factor when stitching images.
"Inward": the translation is large and cannot be ignored. There is parallax, and the scene objects may self-occlude or mutually occlude each other in some of the images, making "stitching" highly nontrivial: the mapping of one image onto another depends on the scene content, unlike the outward case. If the content of the scene moves, stitching becomes an even harder problem.
In both cases, however, one normally relies on bundle adjustment for the final refinement of the camera poses/positions. In the second case the 3D geometry of the scene may need to be reconstructed, depending on the application.
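To make the "outward" case concrete: a homography is a single 3x3 matrix H that maps pixel coordinates in one image to pixel coordinates in another through homogeneous coordinates, independent of scene depth. A small sketch in plain JavaScript, with H in row-major order:

```javascript
// Map pixel (x, y) through the homography H (row-major 3x3 array).
// The point is lifted to homogeneous coordinates (x, y, 1), multiplied
// by H, and the result is divided by its third component.
function applyHomography(H, x, y) {
  const xs = H[0] * x + H[1] * y + H[2];
  const ys = H[3] * x + H[4] * y + H[5];
  const w  = H[6] * x + H[7] * y + H[8];
  return [xs / w, ys / w];
}
```

In the inward-facing case no single H exists: where a pixel lands in the other image depends on the depth of the scene point it sees, which is exactly why parallax makes stitching content-dependent.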
To your questions:
Of course a solution exists: have you seen "The Matrix", with its "bullet time" effect? A Google search for "bullet time" shows several more or less successful attempts at reproducing it; the easiest involves tying an iPhone to a string and swinging it around.
Someone with background and expertise in photogrammetry, 3D computer vision (roughly, they have read and internalized Hartley & Zisserman's book or an equivalent), and nontrivial image processing. There is some art involved in stitching correctly once you have solved the photogrammetry; it's not just "graph-cut it and then multiband-blend it".

Make a mesh unprintable, but still viewable with three.js

Is there a way to make a mesh unprintable with a 3D printer, but still viewable with three.js?
The motivation is that I want to show users a preview of a mesh before they buy it. But since the JS code is viewable, they could download the mesh without paying for it. Degrading the quality of the preview mesh would be one way, but as the quality of the mesh is a selling point, I would like to avoid that.
My idea was to add some kind of triangulation defects which would prevent the printing of the mesh, but which would not prevent threejs from showing the mesh.
Tools like Netfabb or Meshlab should also not be able to automatically repair the mesh.
Is there something like a bad sector copy protection equivalent for 3d models?
Just a few ideas.
1) Augment your shaders to ignore some interval of vertices from the buffer (like every 3rd or something). This way you can add "garbage" to the model file so it cannot easily be lifted from the network.
2) Even once it's in the buffer, the data can still be pulled out by a savvy user, unless you split the model up into many chunks and render them out of order, or only render the front half of the model, making it less useful for 3D printing. One could also render in split views, or use stereoscopic interlacing with a separation of zero.
3) Only render a non-symmetrical half of your model, with the camera controls locked to that half :P
Kinda wonky, a ton of work to implement, and someone will still find a way, I'm sure. But that's my two cents' worth anyway; hope it helps.
I've seen some online shops preview with renders taken every 10-30 degrees around the model. That way you only pass the resulting images, not the model.
Why not show a detailed HD video of your model?
If the mesh is non-manifold it will not print.
a) Render server-side and stream the results as an interactive video.
b) Destroy the mesh while keeping the normals intact for shading. You can randomly flip faces and render double-sided; you can "extrude" edges to mess up the topology. As long as you map the normals correctly, it will shade without any of these defects being visible.
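Idea (b) can be sketched in plain JavaScript on an indexed triangle list: swapping two indices of a triangle reverses its winding order. The flipSomeFaces helper and the every-Nth-face rule are illustrative choices, not a standard API:

```javascript
// Reverse the winding of every Nth triangle in an index buffer.
// `indices` is a flat array of vertex indices, three per face.
function flipSomeFaces(indices, everyNth) {
  const out = indices.slice();
  for (let face = 0; face < out.length / 3; face++) {
    if (face % everyNth === 0) {
      const i = face * 3;
      const tmp = out[i + 1];   // swapping two indices reverses winding
      out[i + 1] = out[i + 2];
      out[i + 2] = tmp;
    }
  }
  return out;
}
```

Rendered in three.js with THREE.DoubleSide and explicit per-vertex normals, the flipped faces look unchanged, while a slicer that relies on consistent orientation to decide inside/outside will reject the mesh. Be aware, though, that repair tools can often re-orient windings automatically, so this alone may not defeat Netfabb or Meshlab.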

Animation and Instancing performances

Talking about the storage and loading of models and animations, which would be better for a game engine:
1 - Have a mesh and a bone system for each model, both in the same file, with each bone system having 10-15 animations (so each model has its own animations).
2 - Have a lot of meshes and a low number of bone systems, with the files separated from each other, so the same bone system (and its animations) can be used for more than one mesh; each bone set can have a lot of animations. (Notice that in this case, reusing the same bone set and animations causes a loss of uniqueness.)
And now, if I need to show 120-150 models in each frame (animated and skinned on the GPU), 40 of them of the same type, is it better to:
1 - Use an instancing system for all models in the game, even if I only need 1 model of each type.
2 - Detect which models need instancing (those that appear more than once) and use a different render system (other shader programs) for them, with non-instanced rendering for the rest.
3 - Not use instancing at all, because the "gain" would be very low for this number of models.
All the "models" discussed here are animated models. Currently I use the MD5 format with GPU skinning but without instancing, and I would like to know if there are better ways to handle the whole animation process.
If someone knows a good tutorial or can put me on the right path... I don't know how I could create an interpolated skeleton and use instancing with it. Let me explain:
I can pack all the bone transformations (matrices) for every animation and frame into a single texture and send it to the vertex shader, then read, for each vertex of each model, the respective animation/frame transformation. This is fine, and I can use instancing here because I will always send the same data for the same model type. But when I need an interpolated skeleton, should I do the interpolation in the vertex shader too? (More texture loads could cause some loss of performance.)
I would need to calculate the interpolated skeleton on the CPU anyway, because I need it for collision...
Any solutions/ideas?
I'm using DirectX, but I think this applies to other systems.
=> Now I just need an answer to the first question; the second is solved (but if anyone wants to give any other suggestions, that's OK).
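On the interpolation question: wherever it runs (on the CPU for collision, or in the vertex shader), the per-bone blend itself is just a lerp of translations and a normalized lerp (or slerp) of rotation quaternions between the two keyframes bracketing the current time. A sketch in plain JavaScript, with each pose as an array of {position, rotation} per bone; the data layout is illustrative:

```javascript
function lerp(a, b, t) { return a + (b - a) * t; }

// Normalized quaternion lerp (nlerp): a common cheap stand-in for slerp
// in skinning. Quaternions are [x, y, z, w] arrays.
function nlerpQuat(q0, q1, t) {
  // Take the shortest path: negate q1 if the dot product is negative.
  const dot = q0[0]*q1[0] + q0[1]*q1[1] + q0[2]*q1[2] + q0[3]*q1[3];
  const s = dot < 0 ? -1 : 1;
  const q = [
    lerp(q0[0], s * q1[0], t),
    lerp(q0[1], s * q1[1], t),
    lerp(q0[2], s * q1[2], t),
    lerp(q0[3], s * q1[3], t),
  ];
  const len = Math.hypot(q[0], q[1], q[2], q[3]);
  return q.map((c) => c / len);
}

// Blend every bone between keyframe poses A and B at parameter t in [0, 1].
function interpolateSkeleton(poseA, poseB, t) {
  return poseA.map((boneA, i) => ({
    position: boneA.position.map((p, k) => lerp(p, poseB[i].position[k], t)),
    rotation: nlerpQuat(boneA.rotation, poseB[i].rotation, t),
  }));
}
```

The same math moves to HLSL unchanged; the trade-off is exactly the one described above: doing it in the shader doubles the texture fetches per bone (two keyframes instead of one), while doing it on the CPU means re-uploading interpolated matrices per instance, which weakens instancing.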
The best example I can think of, and one I have personally used, is one by NVIDIA called Skinned Instancing. The example describes a way to render many instances of the same boned mesh. There is code and a whitepaper available too :)
Skinned Instancing by NVIDIA

Add animation to head and arm (mesh) which is acquired from 3D scanner Kinect

I used ReconstructMe to scan the upper half of my body (arm and head). The result I got is a 3D mesh, which I opened in 3ds Max. What I need to do now is add animation/motion to the 3D arm and head.
I think ReconstructMe created a mesh. Do I need to convert that mesh to a 3D object before adding animation? If so, how do I do it?
Do I need to separate the head and arm to add different animations to them? How do I do that?
I am a beginner in 3ds max. I am using 3ds max 2012, student edition.
Typically you would set up bones, link the mesh to the bones with a Skin or Physique modifier, then animate the bones as needed.
You can have one mesh or separate meshes, depending on your needs.
For setting up the rigging, it would be good to utilize a tutorial like this
http://www.digitaltutors.com/11/training.php?pid=332
I find Digital Tutors to be concise and detailed enough for anybody to grasp the concepts if you're patient enough. Depending on the motion you want, some parts of the bones will require FK (forward kinematics), IK (inverse kinematics), or a mixture of both FK/IK control in areas like the elbows of the arms, etc.
Certain other parts of the character would also benefit from CAT controls. Throughout the whole rigging process, the biggest principle to maintain is hierarchy, along with parenting and linking the controls correctly.
Your mesh's topology also needs to be correct. When scanning from an outside source you will get either (a) a lot of triangles or (b) bad edge flow, so before rigging, make sure to take the time to get your scan's topology into the state it should be in.