I'm wondering why the mesh in lesson 10 looks more three-dimensional than mine. My meshes look like they have no surface and no depth. Here is an example picture:
Any suggestions? I can't see any difference in how the meshes are loaded (XTK's version compared to mine), and I don't think it depends on the (type of) data, because in ParaView the same mesh looks properly three-dimensional.
It is because your mesh files have no normals.
ParaView will create normals if you don't provide them; XTK will not.
You can generate normals for your meshes fairly easily with VTK:
http://www.vtk.org/doc/nightly/html/classvtkPolyDataNormals.html
1. vtkPolyDataReader
2. vtkPolyDataNormals
3. vtkPolyDataWriter
Alternatively, maybe you can export the meshes from ParaView or Slicer? The exported meshes may already contain the normals...
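For what it's worth, if you end up in a three.js context instead, missing normals can be generated at load time with a one-liner. A minimal sketch (the triangle data below is just a stand-in for your loaded mesh):
import * as THREE from 'three';

// A single triangle with positions but no normals, standing in for a
// mesh file that ships without them.
const geometry = new THREE.BufferGeometry();
geometry.setAttribute('position', new THREE.Float32BufferAttribute([
  0, 0, 0,
  1, 0, 0,
  0, 1, 0,
], 3));

// Derive per-vertex normals from the triangles; without a 'normal'
// attribute, lit materials render the surface flat and depthless.
geometry.computeVertexNormals();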
I have been trying to instance some tree meshes in react-three-fiber and three.js;
this is what I have got so far: https://codesandbox.io/s/silly-sunset-74wmt?file=/src/App.js
From one angle the trees look see-through: I am able to see the trunks of ALL the trees,
but the behaviour is normal from the opposite angle.
To me it seems to be some issue with the render order, the meshes, or the shader, but I'm not able to wrap my head around it.
I need the see-through effect not to happen; the set should look the way it does in the second picture from all angles.
As suggested in the comments by Don McCurdy, I needed to use the Alpha Clip property in Blender when exporting my mesh.
I also found an answer here which was quite helpful in understanding the three.js transparency problems at play.
Updated with the solution: https://codesandbox.io/s/silly-sunset-74wmt?file=/src/App.js
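For reference, the three.js-side counterpart of Blender's Alpha Clip is the material's alphaTest threshold, which discards fragments below a given alpha instead of blending them, so the draw order of the trees stops mattering. A minimal sketch (the texture path and threshold value are placeholders):
import * as THREE from 'three';

// Placeholder foliage texture with an alpha channel.
const leafTexture = new THREE.TextureLoader().load('textures/leaves.png');

const leafMaterial = new THREE.MeshStandardMaterial({
  map: leafTexture,
  // Discard fragments whose alpha falls below 0.5 instead of blending
  // them, which sidesteps transparency sorting between instances.
  alphaTest: 0.5,
  // No need for transparent: true, which is what triggers the sorting
  // artifacts in the first place.
  transparent: false,
  side: THREE.DoubleSide, // leaf cards are visible from both sides
});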
I've read that Three.js triangulates all mesh faces, is that correct?
Then I realized that most of the glTF models I've been using have quad faces. It's very easy to triangulate faces in Blender, so I'm curious whether pre-triangulating the faces will result in a quicker load of the mesh?
Thanks in advance, and if you have any other performance tips on three.js and gltf's (besides those listed at https://discoverthreejs.com/tips-and-tricks/) that would be super helpful!
glTF, in its current form, does not support quad faces, only triangles. Current glTF exporters (including Blender's) triangulate the model when creating the glTF file. Some importers will automatically try to merge the triangles back into quads on import.
By design, glTF stores its data in a similar manner to WebGL's vertex attributes, such that it can render efficiently, with minimal pre-processing. But there are some things you can do when creating a model, to help it reach these goals:
Combine materials when possible, to reduce the number of draw calls.
Combine meshes/primitives when possible, also to reduce draw calls (see the sketch after this list).
Be aware that discontinuous normals/UVs increase vertex count (again because of vertex attributes).
Avoid creating textures filled with solid colors. Use Blender's default color/value node inputs instead.
Keep texture sizes web-friendly, and power-of-two. Mobile clients sometimes can't handle anything larger than 2048x2048. Might also try 1024x1024, etc.
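As a rough illustration of the mesh-combining tip above, here's a minimal three.js sketch using BufferGeometryUtils; the two placeholder geometries stand in for separate parts of a model that share one material:
import * as THREE from 'three';
import * as BufferGeometryUtils from 'three/addons/utils/BufferGeometryUtils.js';

// Placeholder geometries standing in for separate parts of one model.
const partA = new THREE.BoxGeometry(1, 1, 1);
const partB = new THREE.SphereGeometry(0.5);

// Merge into a single geometry so everything renders in one draw call.
// (In older three.js releases this helper is named mergeBufferGeometries.)
const merged = BufferGeometryUtils.mergeGeometries([partA, partB]);
const mesh = new THREE.Mesh(merged, new THREE.MeshStandardMaterial());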
I've got a question regarding the exporting of Blender scenes to be loaded into Three.js, with a focus on lighting.
We're using Blender to create our 3D environments, interiors in this case. In Blender the scenes look like they should. Here's an example I've put together with a single point light with Energy: 50 and Distance: 30. I've made these values this high so that the problem is clearly visible inside Three.js. Here is a screenshot from Blender:
Now, when exported using the Three.js exporter for Blender and imported using the SceneLoader, the result in Three.js is:
Don't mind the ugly brightness; the problem is the fact that it lights only parts of the scene. It looks like Three.js incorrectly lights the individual triangles of an object. Our 3D artist makes the objects in Blender using quads.
To make sure the problem does not lie in the exporting and importing process, I've created a PointLight within Three with the same position, distance and brightness. This gives exactly the same result as above.
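For reference, that test amounts to something like this (the position is a placeholder; PointLight takes color, intensity and distance):
import * as THREE from 'three';

const scene = new THREE.Scene();

// Same parameters as the Blender lamp: Energy 50, Distance 30.
const light = new THREE.PointLight(0xffffff, 50, 30);
light.position.set(0, 5, 0); // placeholder, matched to the Blender lamp
scene.add(light);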
I've tried using different lights as well. So far only the Sun light (Directional in Three) seems to give the correct result. The other lights don't work at all when exported from Blender, but that is a problem outside the scope of this post.
My question is: is it in fact the triangles Three.js creates that cause the problem? Would making the triangles in Blender to begin with fix it, or is there a different approach that might?
EDIT: Using Phong materials fixed it, but the lighting still seems to be divided incorrectly across individual objects:
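For anyone hitting the same banding: in the three.js versions of that era, MeshLambertMaterial evaluated lighting per vertex (Gouraud shading), which makes a point light look blocky across large, coarsely tessellated faces, while MeshPhongMaterial evaluates lighting per pixel. A minimal sketch of the swap (the colour values are placeholders):
import * as THREE from 'three';

// Phong shades per pixel, removing the per-triangle banding that
// per-vertex (Gouraud) lighting produces on low-poly geometry.
const wallMaterial = new THREE.MeshPhongMaterial({
  color: 0xaaaaaa,    // placeholder colour
  specular: 0x111111, // keep highlights subtle for interior walls
});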
I am building quite a complex 3D environment in Three.js (FPS-a-like). For this purpose I wanted to structure the loading of textures and materials in an object-oriented way. For example, materials.wood.brownplank is a reusable material with a certain texture and other properties. Below is a simplified visualisation of the process, where models use materials and materials use textures.
loadTextures();   // load every texture once, up front
loadMaterials();  // build reusable materials from the loaded textures
loadModels();     // build the models from those materials
// start doing stuff in the scene
I want to use that material on differently sized objects. However, in Three.js you can't (AFAIK) set a certain texture scale; you have to set the repeat to scale the texture appropriately for your object. But I don't want to do that for every plane of every object I use.
Here is how it looks now
As you can see, the textures are not uniform in size.
Is there an easy way to achieve this? Cloning the texture and/or material every time and setting the repeat according to the geometry won't do :)
I hope someone can help me.
Conclusion:
There is no real easy way to do this. I ended up changing my loading methods: things like materials.wood.brownplank are now, for example, getMaterial('wood', 'brownplank'). Inside that function, new objects are instantiated.
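Roughly what that factory can look like (a hypothetical sketch: the path convention and the width/height parameters are made up for illustration):
import * as THREE from 'three';

const loader = new THREE.TextureLoader();
const textureCache = {};

// Hypothetical factory: returns a fresh material whose texture repeat
// is derived from the target object's dimensions.
function getMaterial(category, name, width = 1, height = 1) {
  const key = category + '/' + name;
  if (!textureCache[key]) {
    textureCache[key] = loader.load('textures/' + key + '.jpg');
  }
  // Clone per call so every object carries its own repeat settings
  // while the underlying image is only loaded once.
  const texture = textureCache[key].clone();
  texture.wrapS = texture.wrapT = THREE.RepeatWrapping;
  texture.repeat.set(width, height);
  texture.needsUpdate = true; // required after cloning in most versions
  return new THREE.MeshStandardMaterial({ map: texture });
}

const wallMaterial = getMaterial('wood', 'brownplank', 4, 2);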
You should be able to do this by modifying your geometry UV coordinates according to the "real" dimensions of each face.
In Three.js, UV coordinates are relative to the face and texture (as in, 0.0 = one edge, 1.0 = other edge), no matter what the actual size of texture or face is. But by modifying the UVs in geometry (multiply them by some factor based on face physical size), you can use the same material and texture in different sizes (and orientations) per face.
You just need to figure out the mapping between UVs, geometry scale and your desired working units (e.g. mm or m). Sorry, I don't have or know of a ready-made algorithm to do it, but that's the approach you probably need to take. It should be quite doable with a bit of experimentation and google-fu.
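As a rough sketch of that approach with the current BufferGeometry API (the helper name and the texelWorldSize unit are invented; this assumes a plane whose UVs span 0..1):
import * as THREE from 'three';

// Hypothetical helper: rescale a plane's 0..1 UVs so the texture tiles
// once every texelWorldSize world units, regardless of the plane's size.
function scaleUVsToWorld(geometry, width, height, texelWorldSize = 1) {
  const uv = geometry.attributes.uv;
  for (let i = 0; i < uv.count; i++) {
    uv.setXY(
      i,
      uv.getX(i) * (width / texelWorldSize),
      uv.getY(i) * (height / texelWorldSize)
    );
  }
  uv.needsUpdate = true;
}

// The texture must wrap for repeat values above 1.0 to show up.
const texture = new THREE.TextureLoader().load('textures/wood/brownplank.jpg');
texture.wrapS = texture.wrapT = THREE.RepeatWrapping;

const geometry = new THREE.PlaneGeometry(4, 2); // a 4x2 world-unit plane
scaleUVsToWorld(geometry, 4, 2);                // now tiles once per unit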
I'm new to OpenGL ES and looking for the best approach for creating a realistic model of an eye whose pupil can dilate and constrict, so I have a plan in mind while running through tutorials.
I've made a mesh in Blender that is basically a sphere with a hole (the 'pole', or central vertex, is removed, along with a couple of the surrounding circle edges).
I plan to add an iris texture directly to the sphere's polys surrounding the hole.
To change pupil size, do I just need a function to reposition the vertices of the hole so the hole dilates or contracts?
I'm going to use OpenGL within an Objective-C app. I have Jeff Lamarche's Objective-C export script. Is it standard to export only the mesh from Blender and add textures in code later in Xcode? Or is it easier/better to set up the textures on the meshes in Blender first and export the more finished product's data to Xcode?
Your question is a bit old, so I'm not sure how much progress you've made, but as I've been climbing up the learning curve myself I thought I'd take a shot at answering.
If you want to animate the individual vertices of your model, I believe the method you'll want is vertex skinning. I can't speak much on that front as I haven't yet had reason to experiment with it, although it's a technique only available in OpenGL ES 2.0. (Which is probably where you want to start anyway; the increased flexibility over 1.1 is more than worth any additional steepness in the learning curve.)
The answer to your texturing question is somewhat mixed. You'll need to actually apply the texture in OpenGL, but what Blender can do for you is determine the texture coordinates. Each vertex of your mesh will have a texture coordinate associated with it: an X, Y pair which maps to a location on the texture image. The coordinates range from 0.0 to 1.0 -- so, since your image texture is a rectangle, the texture coordinate {0, 0} maps to the bottom left corner; {1, 1} maps to the top right corner; {0.5, 0.5} maps to the exact center of the image.
So in Blender, you'd want to go ahead and texture the object with UV mappings. When you export, although your exported mesh won't contain any of the image content, it will retain the texture coordinates which map to your image content. This allows you to apply the texture in OpenGL so that it is applied the same way it appeared in Blender.
I've personally had some trouble getting Jeff Lamarche's script to spit out the texture coordinates, as the Blender API seems to change significantly with each release. I've had more success with an .obj converter, so I've been exporting from Blender to .obj and using a command-line tool to go from .obj to a C header file.
If you encounter similar problems with Lamarche's script, this post might help solve it: http://38leinad.wordpress.com/2012/05/29/blender-2-6-exporting-uv-texture-coordinates/
And this is a good resource for a .obj to .h script:
http://heikobehrens.net/2009/08/27/obj2opengl/