Importing mesh from blender causes distortion - three.js

I'm trying to load a skinned mesh from blender into a THREE.js scene, but it... looks a little odd:
There is a skeleton loaded from the animation alone, the mesh modified for the game, and a small one loaded directly with the three.js ObjectLoader.
It's supposed to look more like this (from Blender):

I found it!
When exporting from Blender, I raised the number of bone influences per vertex to 4. It appears some vertices were influenced by more than two bones, so when only two influencing bones per vertex were exported, the mesh distorted.
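As a complementary check on the three.js side, recent versions of three.js can also rebalance truncated weights after loading; a minimal sketch, assuming a JSON scene that THREE.ObjectLoader can read ('model.json' is a placeholder path and scene is an existing THREE.Scene):

// Sketch: normalize skin weights after loading, so each vertex's
// bone influences sum to 1 even if the export truncated them.
var loader = new THREE.ObjectLoader();
loader.load('model.json', function (loaded) { // placeholder path
  loaded.traverse(function (node) {
    if (node.isSkinnedMesh) {
      node.normalizeSkinWeights(); // available on THREE.SkinnedMesh
    }
  });
  scene.add(loaded); // assumes an existing THREE.Scene
});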

Related

Three js Explode Modifier with Material

Trying to mimic the behaviour of this site.
http://experience.mausoleodiaugusto.it/it/chapter-02
I can successfully load a dae model with a texture and have it displayed in the three.js scene. I have also played around with the tessellate and explode modifiers with reasonable success. However, the method I am using takes the dae geometry, creates faces from it, assigns each a colour, and turns them into a new mesh added to the scene alongside my original dae model. So I end up with two objects: my original dae model with its texture, and the new exploded mesh with coloured faces.
How would I map the texture from the dae file to each individual face in the new mesh, so that when it explodes it looks accurate as per the example? Or am I going about this the wrong way? Perhaps each face needs its individual texture already mapped in the dae file?
Any suggestions would be welcome.
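One way to approach it, sketched with the legacy THREE.Geometry API of that era (illustrative only, and assuming the dae geometry still carries its faceVertexUvs): build each per-face mesh with the face's own UV triple copied across, then reuse the original textured material instead of a per-face colour.

// Sketch: split a textured geometry into one mesh per face,
// carrying each face's UVs so the shared texture still maps correctly.
function explodeIntoFaces(geometry, texturedMaterial) {
  var meshes = [];
  geometry.faces.forEach(function (face, i) {
    var g = new THREE.Geometry();
    g.vertices.push(
      geometry.vertices[face.a].clone(),
      geometry.vertices[face.b].clone(),
      geometry.vertices[face.c].clone()
    );
    g.faces.push(new THREE.Face3(0, 1, 2, face.normal.clone()));
    // carry over this face's UV triple from the source geometry
    g.faceVertexUvs[0].push(geometry.faceVertexUvs[0][i].map(function (uv) {
      return uv.clone();
    }));
    meshes.push(new THREE.Mesh(g, texturedMaterial));
  });
  return meshes;
}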

Convert THREE.js scene to CANNON.js world

I have a basic THREE.js scene, created in Blender, including cubes and rotated planes. Is there any way I can automatically convert this THREE.js scene into a CANNON.js world?
Thanks
Looking at the Three.js Blender exporter, it seems to export only mesh data, with no information about the mathematical shapes (boxes, planes, spheres, etc.) that Cannon.js needs to work. You could try to import your meshes directly into Cannon.js using its Trimesh class, but this would sadly only work for collisions against spheres and planes.
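For illustration, a minimal sketch of that Trimesh route, assuming an indexed THREE.BufferGeometry and an existing CANNON.World named world (not production code):

// Sketch: wrap a mesh's triangles in a CANNON.Trimesh.
var position = mesh.geometry.attributes.position.array;
var index = mesh.geometry.index.array;
var body = new CANNON.Body({ mass: 0 }); // static; Trimesh suits static geometry best
body.addShape(new CANNON.Trimesh(Array.from(position), Array.from(index)));
world.addBody(body);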
What you need to feed Cannon.js is mathematical geometry data: which of the triangles in your mesh represent a box (or plane), and where its center of mass is.
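A minimal sketch of that idea for the box case (this is not what the exporter emits, just an illustration): derive a CANNON.Box from each mesh's local-space bounding box and copy its transform, assuming the mesh is unscaled.

// Sketch: build a CANNON.Box body from a THREE.Mesh's local bounds.
function meshToBoxBody(mesh, mass) {
  mesh.geometry.computeBoundingBox();
  var bb = mesh.geometry.boundingBox; // local-space bounds; rotation is handled by the body
  var halfExtents = new CANNON.Vec3(
    (bb.max.x - bb.min.x) / 2,
    (bb.max.y - bb.min.y) / 2,
    (bb.max.z - bb.min.z) / 2
  );
  var body = new CANNON.Body({ mass: mass });
  body.addShape(new CANNON.Box(halfExtents));
  body.position.copy(mesh.position);     // CANNON.Vec3.copy accepts any {x, y, z}
  body.quaternion.copy(mesh.quaternion); // same for CANNON.Quaternion.copy
  return body;
}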
A common (manual) workflow for creating 3D WebGL physics is to import the 3D models into a WebGL-enabled game engine (such as Unity, Goo Create, or PlayCanvas). In the game engine you can add collider shapes (boxes, planes, spheres, etc.) to your models so the physics engine can work efficiently. From there you can preview your physics simulation and export a complete WebGL experience.
Going to post another answer since there are a few new options to consider here...
I wrote a simple mesh2shape(...) helper that can convert (one object at a time) from THREE.Mesh to CANNON.Shape objects. It doesn't support certain features, such as heightmaps/terrain.
Example:
var shape = mesh2shape(object3D, { type: mesh2shape.Type.BOX });
There is an (experimental!) BLENDER_physics extension for the glTF format that includes physics data with a model. You could add physics data in Blender, export to glTF, and then modify THREE.GLTFLoader to pass the physics data along to your application, helping you construct the CANNON.js objects.

Three.js scene lighting from blender - Issues with distance and triangles

I've got a question regarding the exporting of Blender scenes to be loaded into Three.js, with the focus on lighting.
We're using Blender to create our 3D environments, interiors in this case. In Blender the scenes look like they should. Here's an example I've put together, with a single point light with Energy: 50 and Distance: 30. I've made these values this high so the problem is clearly visible inside Three.js. Here is a screenshot from Blender:
Now, when exported using the Three.js exporter for Blender and imported using the SceneLoader, the result in Three.js is:
Don't mind the ugly brightness; the problem is that it lights only parts of the scene. It looks like Three.js incorrectly lights different triangles of an object. Our 3D artist builds the objects in Blender using quads.
To make sure the problem does not lie in the exporting and importing process, I've created a PointLight within Three.js with the same position, distance and brightness. This gives the exact same result as above.
I've tried using different lights as well. So far only the Sun light (Directional in Three.js) seems to give the correct result. The other lights don't work at all when exported from Blender, but that is a problem outside the scope of this post.
My question is: is it in fact the triangles Three.js creates that cause the problem? Would creating the triangles in Blender to begin with fix it, or is there a different approach that might?
EDIT: Using Phong materials fixed it, but the lighting now seems incorrectly divided across individual objects:
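For anyone hitting the same wall: in the three.js of that era, MeshLambertMaterial was lit per vertex (Gouraud shading), so a point light falling on large, sparsely tessellated faces produces exactly this triangle-shaped banding, while MeshPhongMaterial is lit per fragment. A minimal sketch of the swap the EDIT describes, assuming an existing scene:

// Sketch: replace per-vertex-lit Lambert materials with per-fragment Phong.
scene.traverse(function (node) {
  if (node.material instanceof THREE.MeshLambertMaterial) {
    node.material = new THREE.MeshPhongMaterial({
      color: node.material.color,
      map: node.material.map
    });
  }
});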

Different mesh visualization WebGL

I'm wondering why the mesh in lesson 10 looks more three-dimensional than mine. My meshes look like they have no surface and no depth. Here is an example picture:
Any suggestions? I can't see a difference in how the meshes are loaded (XTK's version compared to mine). I don't think it depends on the (type of) data, because in ParaView the same mesh looks properly three-dimensional.
It is because your mesh files have no normals.
ParaView will create normals if you don't provide them; XTK will not.
You can generate normals for your meshes fairly easily with VTK:
http://www.vtk.org/doc/nightly/html/classvtkPolyDataNormals.html
1. vtkPolyDataReader
2. vtkPolyDataNormals
3. vtkPolyDataWriter
Alternatively, maybe you can export the meshes from ParaView or Slicer? The exported meshes may then contain the normals.

Basic approach to pupil constriction/dilation of eye model in OpenGL

I'm new to OpenGL ES and looking for the best approach for creating a realistic model of an eye whose pupil can dilate and constrict, so I have a plan in mind while running through tutorials.
I've made a mesh in Blender that is basically a sphere with a hole (the 'pole', or central vertex, is removed along with a couple of the surrounding circular edge loops).
I plan to add an iris texture directly to the sphere's polys surrounding the hole.
To change pupil size, do I just need a function to reposition the vertices of the hole so the hole dilates or contracts?
I'm going to use OpenGL within an Objective-C app. I have Jeff Lamarche's Objective-C export script. Is it standard to export only the mesh from Blender and add textures in code later in Xcode? Or is it easier/better to set up the textures on the meshes in Blender first and export the more finished product's data to Xcode?
Your question is a bit old, so I'm not sure how much progress you've made, but as I've been climbing up the learning curve myself I thought I'd take a shot at answering.
If you want to animate the individual vertices of your model, I believe the method you'll want is vertex skinning. I can't speak much on that front, as I haven't yet had reason to experiment with it, although it's a technique only available in OpenGL ES 2.0. (Which is probably where you want to start anyway; the increased flexibility over 1.1 is more than worth any added steepness in the learning curve.)
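For the simple dilation asked about above, full skinning may be overkill; a plain CPU-side vertex update is enough. A rough sketch, shown in WebGL/three.js terms for brevity since the thread targets OpenGL ES, where ringVertexIndices is a hypothetical list of the vertex indices forming the pupil opening:

// Sketch: dilate/constrict the pupil by scaling the hole's ring vertices
// radially in the x/y plane, assuming the eye looks down the z-axis.
function setPupilScale(geometry, ringVertexIndices, factor) {
  var pos = geometry.attributes.position;
  ringVertexIndices.forEach(function (i) {
    pos.setXY(i, pos.getX(i) * factor, pos.getY(i) * factor);
  });
  pos.needsUpdate = true; // re-upload the vertex buffer on the next render
}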
The answer to your texturing question is somewhat mixed. You'll need to actually apply the texture in OpenGL, but what Blender can do for you is determine the texture coordinates. Each vertex of your mesh will have a texture coordinate associated with it: an (X, Y) pair that maps to a location on the texture image. The coordinates range from 0.0 to 1.0, so since your image texture is a rectangle, the texture coordinate {0, 0} maps to the bottom-left corner, {1, 1} maps to the top-right corner, and {0.5, 0.5} maps to the exact center of the image.
So in Blender, you'd want to go ahead and texture the object with UV mappings. When you export, although the exported mesh won't contain any of the image content, it will retain the texture coordinates that map to your image content. This will allow you to apply the texture in OpenGL so that it appears the same way it did in Blender.
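In WebGL/three.js terms (used here purely as a compact illustration; 'iris.png' and exportedGeometry are placeholders), applying the image then reduces to binding it in a textured material, because the UVs travel with the exported geometry:

// Sketch: the exported UVs do the mapping; the code only binds the image.
var texture = new THREE.TextureLoader().load('iris.png'); // placeholder path
var material = new THREE.MeshPhongMaterial({ map: texture });
var eye = new THREE.Mesh(exportedGeometry, material); // UVs ride along with the geometry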
I've personally had some trouble getting Jeff Lamarche's script to spit out the texture coordinates, as the Blender API seems to change significantly with each release. I've had more success with an .obj converter, so I've been exporting from Blender to .obj and using a command-line tool to go from .obj to a C header file.
If you encounter similar problems with Lamarche's script, this post might help solve it: http://38leinad.wordpress.com/2012/05/29/blender-2-6-exporting-uv-texture-coordinates/
And this is a good resource for a .obj to .h script:
http://heikobehrens.net/2009/08/27/obj2opengl/
