I'm trying to fit a human skeleton completely inside a human body model and then rotate both meshes, but I'm not getting the result I expected. I need your help.
The human integument 3D model was obtained from MakeHuman. I then bought a separate 3D human skeleton model elsewhere to fit inside the integument model. The skeleton model is significantly larger than the integument model, so I used Blender to scale the skeleton down. Within Blender, the skeleton fit nicely inside the integument.
My problems start when I integrate those two models into iOS.
First problem: When both the skeleton and integument models are loaded, the skeleton mesh node still appears much bigger than the integument, even though it was already scaled down in Blender. I had to scale it down again using Cocos3D's uniformScale property in order to fit it inside the integument model. Note that both mesh nodes are positioned at the exact same distance from the camera.
Second problem: As I rotate both mesh nodes, the skeleton mesh node starts to surface and bleed through the integument mesh node. Both have the exact same rotation vector and the same origin.
Help is much needed and appreciated.
Thanks to Bill Hollings, this problem was solved by adding the skeleton as a child node of the integument model.
Is it possible to change the width of an FBX model in 3D without changing its realistic look, so that after changing its dimensions the model does not look stretched?
If two objects are placed beside each other, I need to increase the size of one object and change the position of the other object relative to the first.
Thanks in advance.
This breaks down into two problems. If you want to scale an object in just one dimension it will always stretch; take your table as an example:
While the board looks fine, the legs will get stretched and look unrealistic.
Now the question is what can you do?
It depends on your model.
First of all, does your model consist of only one mesh, or does every component have its own mesh?
Preferably you want each component to be an independent mesh object. For your table it would be something like this:
This way you can scale only the board and then adjust the position of the legs so that they fit the new board size.
If you have only one mesh there is not a lot you can do in Unity. For that you would need to go into Blender or any other 3D modeling tool and split the components manually.
Now, if you stretched only the board and your model has a texture, you will notice that the texture looks stretched.
What can you do about that?
Go to your texture and, first of all, check the wrap mode.
In this case we want it set to Repeat. After that we need to change the material settings.
Since we stretched the geometry we need to change the tiling: before, it was y = 1, but we scaled the y dimension, so now we need to adapt this number as well and make the texture repeat. For a table this is doable; if we work with more complex textures that have specific parts, this will not work and you have to change the texture manually.
Now the texture looks better, but you will probably have abrupt color changes because the texture is repeated (I circled it on the picture). For this problem you have to edit the texture in an image editing program and make it seamless.
I hope this helped a bit. I know this is only the basics, and to get a perfect texture and image you have to put in a bit more work, but for that I would highly recommend reading a tutorial.
I've read that Three.js triangulates all mesh faces; is that correct?
Then I realized that most of the glTF models I've been using have quad faces. It's very easy to triangulate faces in Blender, so I'm curious whether pre-triangulating the faces will result in quicker loading of the mesh?
Thanks in advance, and if you have any other performance tips on three.js and gltf's (besides those listed at https://discoverthreejs.com/tips-and-tricks/) that would be super helpful!
glTF, in its current form, does not support quad faces, only triangles. Current glTF exporters (including Blender) triangulate the model when creating the glTF file. Some will automatically try to merge things back together on import.
By design, glTF stores its data in a similar manner to WebGL's vertex attributes, such that it can render efficiently, with minimal pre-processing. But there are some things you can do when creating a model, to help it reach these goals:
Combine materials when possible, to reduce the number of draw calls.
Combine meshes/primitives when possible, also to reduce draw calls (see the sketch after this list).
Be aware that discontinuous normals/UVs increase vertex count (again because of vertex attributes).
Avoid creating textures filled with solid colors. Use Blender's default color/value node inputs instead.
Keep texture sizes web-friendly, and power-of-two. Mobile clients sometimes can't handle anything larger than 2048x2048. Might also try 1024x1024, etc.
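Since the question also asks for general three.js tips, here is a rough runtime illustration of the draw-call advice above. This is only a sketch: the file name and the assumption that all meshes share one material and compatible vertex attributes are mine, and the exact import path and name of the merge helper (`mergeGeometries` vs. the older `mergeBufferGeometries`) varies between three.js releases.

```js
import * as THREE from 'three';
import { GLTFLoader } from 'three/addons/loaders/GLTFLoader.js';
import { mergeGeometries } from 'three/addons/utils/BufferGeometryUtils.js';

const scene = new THREE.Scene(); // or your existing scene

const loader = new GLTFLoader();
loader.load('model.glb', (gltf) => { // placeholder file name
  const geometries = [];
  let sharedMaterial = null;

  gltf.scene.updateMatrixWorld(true);
  gltf.scene.traverse((node) => {
    if (node.isMesh) {
      // Bake each mesh's world transform into its geometry so the
      // merged result keeps every part in the right place.
      geometries.push(node.geometry.clone().applyMatrix4(node.matrixWorld));
      sharedMaterial = node.material; // assumes all meshes share one material
    }
  });

  // One merged geometry means one draw call instead of one per mesh.
  scene.add(new THREE.Mesh(mergeGeometries(geometries), sharedMaterial));

  // renderer.info.render.calls reports the draw calls of the last frame,
  // which is a handy way to confirm the merge actually helped.
});
```

Combining in Blender before export is still preferable, since it also shrinks the file; a runtime merge like this is just a fallback when you can't touch the asset.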
I am attempting to render a flat, dynamically created heatmap on top of a 3D model that is loaded from an OBJ (or STL).
I am currently loading and rendering an OBJ with Three.js. I have Vector3 points that I am currently drawing as simple red cubes (image below). These data points have all been raycast onto my OBJ's mesh and are lying on the surface. The Vector3 points are loaded from an external data source and will change depending on what data is being viewed/collected.
I would like to render my vector3 point data into a heatmap on the surface of my OBJ. Here are some examples illustrating the type of visual effects I am trying to achieve:
I feel like vertex coloring is the method of achieving this, but my issue is that my OBJ model does not have enough tessellation to do this. As you can see many red dots fall on each face. I am struggling to find a way to draw over my object's mesh with colors exactly where my red point data is. I was assuming I would need to convert my random vector3 points into a mesh, but cannot find a method to do so.
I've looked at the possibility of generating a texture, but 1) I do not have a UV map for my OBJs and do not see a way to programmatically generate them and 2) I am a bit lost on how I would correlate vector3 point data to UV points.
I've looked at using shaders, but my vector3 point data appears to be too large for using a shader (could be hundreds of thousands of points). I also feel it is not the right approach to render the heatmap every frame and would rather only render it once on load.
I've looked into isosurfaces with point clouds and the marching cubes algorithm, but I didn't think this was the right direction, since my data is only somewhat like a point cloud, and I am unsure how I would keep the result smooth along the surface of my OBJ mesh.
Although I would prefer to keep everything in JavaScript for viewing in the browser, I am open to doing server side processing in any language/program with REST so long as it can be automated without human intervention, and pushed back to the browser for rendering.
Any suggestions or guidance is appreciated.
I'm only guessing, but it seems like first you need UV coordinates that map every triangle to a texture. Rather than doing this by hand, I'd suggest using a modeling package. Most modeling packages have some way of automatically and uniformly mapping every triangle to a texture. For example, in Blender:
Next, put the heatmap into the texture by computing which triangles are affected by each dot (your raycasting), looking up their texture coordinates, projecting the dot into texture space, and then writing the colors into that part of the texture. I'm only guessing here, but you probably can't just paint exact points: heat that lands near the edge of a triangle needs to bleed over into the adjacent triangle, and that adjacent triangle might be using a completely different part of the texture.
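To make that concrete, here is a rough three.js sketch of the texture-painting step. Everything named here (`mesh`, `dataPoints`, the ray direction, the spot radius) is a placeholder, and it ignores the seam/adjacent-triangle problem described above:

```js
import * as THREE from 'three';

// `mesh` is assumed to be the loaded, UV-unwrapped OBJ mesh, and
// `dataPoints` an array of THREE.Vector3 positions near its surface.
const size = 1024;
const canvas = document.createElement('canvas');
canvas.width = canvas.height = size;
const ctx = canvas.getContext('2d');
ctx.fillStyle = '#808080'; // neutral background color
ctx.fillRect(0, 0, size, size);

const raycaster = new THREE.Raycaster();
const dir = new THREE.Vector3(0, -1, 0); // pick a direction that actually hits the surface

for (const p of dataPoints) {
  // Cast from slightly above each point toward the mesh.
  raycaster.set(p.clone().addScaledVector(dir, -0.1), dir);
  const hit = raycaster.intersectObject(mesh, true)[0];
  if (!hit || !hit.uv) continue;

  // hit.uv is the interpolated texture coordinate at the intersection,
  // i.e. the data point already projected into texture space.
  const x = hit.uv.x * size;
  const y = (1 - hit.uv.y) * size; // canvas y runs downward
  const g = ctx.createRadialGradient(x, y, 0, x, y, 16);
  g.addColorStop(0, 'rgba(255, 0, 0, 0.8)');
  g.addColorStop(1, 'rgba(255, 0, 0, 0)');
  ctx.fillStyle = g;
  ctx.fillRect(x - 16, y - 16, 32, 32);
}

// Paint once when the data loads, then reuse the texture every frame.
mesh.material.map = new THREE.CanvasTexture(canvas);
mesh.material.needsUpdate = true;
```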
How can I dress a human body? I have imported the body model and the t-shirt as two separate meshes. The human body includes shape keys.
But when I modify the morphTargetInfluences key of the body, the t-shirt doesn't fit the new body shape.
How can I make the t-shirt fit when the key value changes? How can I do that using three.js?
I'm using the version 1.4.0 of the Three.js exporter (three.js r71) and Blender 2.75a
The point is, your morph targets are only present in your character model and unfortunately won't magically fit the cloth. Apply the morph to the cloth as well in your editing tool and morph both equally; this works without extra effort.
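Assuming the shirt has been re-exported with the same shape keys as the body, keeping the two meshes in sync in three.js is then just a matter of writing the same influence value to both. The function and shape-key name below are placeholders, not from the question:

```js
// Drive the same morph target on both meshes so the shirt follows the body.
// `body` and `shirt` are assumed to be THREE.Mesh objects whose geometries
// both contain a shape key exported under the same name.
function setShape(name, value) {
  for (const mesh of [body, shirt]) {
    const index = mesh.morphTargetDictionary[name];
    if (index !== undefined) mesh.morphTargetInfluences[index] = value;
  }
}

setShape('muscular', 0.7); // placeholder key name: body and shirt deform together
```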
I'm actually also working on a solution for wearable cloth. I'll give shared vertex buffers a try, where the cloth vertices "connect" to the vertices of the body model with a relative offset, so you would only have to assign the cloth once instead of applying and exporting whole morph target sets.
The downside would be that your vertices have to stay the same; once you modify the mesh, you'd have to export all the related cloth again. This could basically be solved by an automated process, like one that searches for the nearest vertices, but cloth is usually extruded from the base mesh so that it "fits" perfectly without intersections, so this isn't really surprising.
I'm new to OpenGL-ES and looking for the best approach for creating a realistic model of an eye whose pupil can dilate and constrict so I have a plan in mind while running through tutorials.
I've made a mesh in Blender that is basically a sphere with a hole (the "pole", or central vertex, is removed along with a couple of the surrounding edge loops).
I plan to add an iris texture directly to the sphere's polys surrounding the hole.
To change pupil size, do I just need a function to reposition the vertices of the hole so the hole dilates or contracts?
I'm going to use OpenGL within an Objective-C app. I have Jeff Lamarche's Objective-C export script. Is it standard to export only the mesh from Blender and add textures in code later in Xcode? Or is it easier/better to set up the textures on the meshes in Blender first and export the more finished product's data to Xcode?
Your question is a bit old, so I'm not sure how much progress you've made, but as I've been climbing up the learning curve myself I thought I'd take a shot at answering.
If you want to animate the individual vertices of your model, I believe the method you'll want is vertex skinning. I can't speak much on that front as I haven't yet had reason to experiment with it, although it's a technique only available in OpenGL ES 2.0. (That is probably where you want to start anyway; the increased flexibility over 1.1 is more than worth any added steepness in the learning curve.)
The answer to your texturing question is somewhat mixed. You'll need to actually apply the texture in OpenGL, but what Blender can do for you is determine the texture coordinates. Each vertex of your mesh will have a texture coordinate associated with it. A texture coordinate is an (X, Y) pair that maps to a location on the texture image. The coordinates range from 0.0 to 1.0, so, since your image texture is a rectangle, the texture coordinate {0, 0} maps to the bottom-left corner, {1, 1} maps to the top-right corner, and {0.5, 0.5} maps to the exact center of the image.
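As a purely illustrative example (written in JavaScript for brevity; WebGL shares this convention with OpenGL ES, and the numbers are made up), per-vertex data for a textured quad pairs each position with such a coordinate:

```js
// Hypothetical interleaved vertex data for one textured quad:
// each vertex carries a position (x, y, z) and a texture coordinate (u, v).
const quadVertices = new Float32Array([
  // x,    y,    z,     u,   v
  -1.0, -1.0,  0.0,   0.0, 0.0, // bottom-left  -> bottom-left of the image
   1.0, -1.0,  0.0,   1.0, 0.0, // bottom-right -> bottom-right of the image
   1.0,  1.0,  0.0,   1.0, 1.0, // top-right    -> top-right of the image
  -1.0,  1.0,  0.0,   0.0, 1.0, // top-left     -> top-left of the image
]);
```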
So in Blender, you'd want to go ahead and texture the object with UV mappings. When you export, although your exported mesh won't contain any of the image content, it will retain the texture coordinates that map to your image content. This will allow you to apply the texture in OpenGL so that it is applied the same way it appeared in Blender.
I've personally had some trouble getting Jeff Lamarche's script to spit out the texture coordinates, as Blender's API seems to change significantly with each release. I've had more success with an .obj converter, so I've been exporting from Blender to .obj and using a command-line tool to go from .obj to a C header file.
If you encounter similar problems with Lamarche's script, this post might help solve it: http://38leinad.wordpress.com/2012/05/29/blender-2-6-exporting-uv-texture-coordinates/
And this is a good resource for a .obj to .h script:
http://heikobehrens.net/2009/08/27/obj2opengl/