I have a model in Maya/Blender which has multiple UVs.
I thought that the .mtl has all the info about materials/textures (as I can see the links in the .mtl), but apparently I have to link every texture to an object with src="texture.jpg".
Is there any other way than combining those textures in Photoshop/GIMP, or breaking my model into separate .obj files, each with its own texture?
Should I look more into the custom shading options in A-Frame/three.js (registerShader)?
The OBJ/MTL format does not support multiple UV sets. It may not support multiple materials on the same geometry either, I'm not sure. FBX and Collada do support multiple UVs, so you could try one of those.
But, searching for "threejs multiple UVs" shows that it is not easy to do multiple UVs without custom shaders, even once you have a newer model format. I would maybe try to bake your multiple UVs down into a single set in the modeling software, if that's possible.
MTL files can associate different texture maps with different material groups in the OBJ file, but the OBJ file can only describe a single set of UVs per poly face. Whether or not your OBJ writer or THREE's OBJ reader supports it is a different matter.
On a side note: the actual Wavefront OBJ spec is interesting in that it supported all kinds of things no one implemented after 1999 or so, including NURBS patches with trim curves, and 1D texture maps (essentially LUTs).
https://en.wikipedia.org/wiki/Wavefront_.obj_file
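For reference, here is a minimal sketch of loading an .obj together with its .mtl in three.js, so that each material group picks up the texture map listed in the .mtl (the file names and the global scene are placeholders):

import { MTLLoader } from 'three/examples/jsm/loaders/MTLLoader.js';
import { OBJLoader } from 'three/examples/jsm/loaders/OBJLoader.js';

// Placeholder file names: every material group declared in model.mtl
// (map_Kd texture.jpg, etc.) is applied to the matching group in model.obj.
new MTLLoader().load('model.mtl', (materials) => {
  materials.preload();
  new OBJLoader()
    .setMaterials(materials)
    .load('model.obj', (object) => {
      scene.add(object); // assumes an existing THREE.Scene called `scene`
    });
});

This still gives you only one UV set per face, though, so it does not solve the multiple-UV problem by itself.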
Coming from the Flash background, I'm used to creating a fragment shader in the following manner:
filters = [];
filters.push(new BasicFilter(SomeTexture));
filters.push(new NormalMapFilter(SomeOtherTexture));
myShader = new Shader(filters);
As a result, I could combine a number of effects freely and easily without needing to write a large separate shader each time.
In the case of three.js, I noticed that for complex visual effects, a single shader is written, like here: http://threejs.org/examples/#webgl_materials_bumpmap_skin
Is it possible to write e.g. a bump map shader and environment map shader separately and then combine them dynamically when needed? What would be the most proper way of doing this?
It is not possible directly in three.js. Three creates shaders by passing the shader code as a string directly to a WebGL shader object to compile. There is no mechanism to automatically build complex shaders, you have to write your own. Three.js conveniently adds a couple of uniforms/attributes to the shader, but you have to write what's done with them.
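To make the string-based approach concrete, here is a minimal ShaderMaterial sketch (the texture file and the uniform name are placeholders); everything the shader does has to be spelled out in the two GLSL strings:

import * as THREE from 'three';

const material = new THREE.ShaderMaterial({
  uniforms: {
    map: { value: new THREE.TextureLoader().load('SomeTexture.jpg') } // placeholder texture
  },
  vertexShader: `
    varying vec2 vUv;
    void main() {
      vUv = uv; // uv/position/matrices are the uniforms/attributes three.js adds for you
      gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
    }
  `,
  fragmentShader: `
    uniform sampler2D map;
    varying vec2 vUv;
    void main() {
      gl_FragColor = texture2D(map, vUv);
      // Any extra effect (bump, normal map, env map, ...) has to be written
      // into this same string by hand; there is no filter stack to push onto.
    }
  `
});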
You can use EffectComposer now to combine shaders.
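For example, a rough post-processing chain with EffectComposer (the passes used here are stock shaders shipped with the three.js examples; renderer, scene and camera are assumed to exist already):

import { EffectComposer } from 'three/examples/jsm/postprocessing/EffectComposer.js';
import { RenderPass } from 'three/examples/jsm/postprocessing/RenderPass.js';
import { ShaderPass } from 'three/examples/jsm/postprocessing/ShaderPass.js';
import { RGBShiftShader } from 'three/examples/jsm/shaders/RGBShiftShader.js';
import { VignetteShader } from 'three/examples/jsm/shaders/VignetteShader.js';

const composer = new EffectComposer(renderer);
composer.addPass(new RenderPass(scene, camera));   // draw the scene first
composer.addPass(new ShaderPass(RGBShiftShader));  // then stack screen-space effects
composer.addPass(new ShaderPass(VignetteShader));

function animate() {
  requestAnimationFrame(animate);
  composer.render(); // call this instead of renderer.render()
}
animate();

Note that EffectComposer combines full-screen post-processing passes rather than merging material shaders, so it covers the filter-stack use case only for screen-space effects.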
I'm currently trying to create a three.js mesh which has a large number of faces (in the thousands) and uses textures. However, my problem is that each face can have its texture changed at runtime, so it's possible that every face ends up with a different texture.
I tried preloading a materials array (for MeshFaceMaterial) with default textures and assigning each face a different materialIndex, but that caused a lot of lag.
A bit of research led to here, which says
If number is large (e.g. each face could be potentially different), consider different solution, using attributes / textures to drive different per-face look.
I'm a bit confused about how shaders work, and in particular I'm not even sure how you would use textures with attributes. I couldn't find any examples of this online, as most texture-shader related examples I found used uniforms instead.
So my question is this: Is there an efficient way for creating a mesh with a large number of textures, changeable at runtime? If not, are there any examples for the aforementioned attributes/textures idea?
Indeed, this can be a tricky thing to implement. I can't speak much to GLSL (I'm still learning), but what I do know is that uniforms are constant across a draw call and do not vary per face, so you would likely want an attribute for your case (though I welcome being corrected here). However, I do have a far simpler suggestion.
You could use 1 texture that you can "subdivide" into all the tiny textures you need for each face. Then at runtime you can pull out the UV coordinates from the texture and apply it to the faces individually. You'll still deal with computation time, but for a thousand or so faces it should be doable. I tested with a 25k face model and it was quick changing all faces per tick.
Now the trick is navigating the faceVertexUvs three-dimensional array. For example, with a textured cube that has 12 faces, you could reset all faces to show a single side like so:
for (var uvCnt = 0; uvCnt < mesh.geometry.faceVertexUvs[0].length; uvCnt += 2) {
    mesh.geometry.faceVertexUvs[0][uvCnt][0] = mesh.geometry.faceVertexUvs[0][2][0];
    mesh.geometry.faceVertexUvs[0][uvCnt][1] = mesh.geometry.faceVertexUvs[0][2][1];
    mesh.geometry.faceVertexUvs[0][uvCnt][2] = mesh.geometry.faceVertexUvs[0][2][2];
    mesh.geometry.faceVertexUvs[0][uvCnt + 1][0] = mesh.geometry.faceVertexUvs[0][3][0];
    mesh.geometry.faceVertexUvs[0][uvCnt + 1][1] = mesh.geometry.faceVertexUvs[0][3][1];
    mesh.geometry.faceVertexUvs[0][uvCnt + 1][2] = mesh.geometry.faceVertexUvs[0][3][2];
}
Here I have a cube that has 6 colors (1 per side), and I loop through each faceVertexUv (stepping by 2, since two triangles make a plane) and reset all the UVs to my second side, which is blue. Of course you'll want to map the coordinates into an object of sorts so you can easily query the object to return and reset the corresponding UVs, but I don't know your use case. For completeness, you'll want to run mesh.geometry.uvsNeedUpdate = true; at runtime to see the updates. I hope that helps.
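As a rough sketch of the atlas idea above (the 4x4 tile layout and the helper name are assumptions), the UVs of a single face can be remapped to one tile like this:

// Assumed 4x4 texture atlas; tileIndex picks which sub-texture a face shows.
var TILES_PER_ROW = 4;

function setFaceTile(geometry, faceIndex, tileIndex) {
    var size = 1 / TILES_PER_ROW;
    var col = tileIndex % TILES_PER_ROW;
    var row = Math.floor(tileIndex / TILES_PER_ROW);
    // faceVertexUvs[0][faceIndex] is an array of three THREE.Vector2, one per vertex.
    geometry.faceVertexUvs[0][faceIndex].forEach(function (uv) {
        // Assumes this face's UVs currently span the full 0..1 range;
        // squeeze them into the chosen tile instead.
        uv.set(col * size + uv.x * size, row * size + uv.y * size);
    });
    geometry.uvsNeedUpdate = true;
}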
Are multiple render targets supported in Three.js? I mean, using a single fragment shader to write to different render targets.
I have a fragment shader that calculates several things and I need to output them separately into different textures for further manipulations.
Multiple render targets are not supported in three.js.
If you are interested in GPGPU within the framework of three.js, you can find a nice example here: http://jabtunes.com/labs/3d/gpuflocking/webgl_gpgpu_flocking3.html. Just be aware, it is using an older version of three.js.
three.js r.60
Though the above answer was correct at the time of its writing, it is no longer the case.
With the introduction of WebGL2, ThreeJS does indeed support multiple render targets.
https://threejs.org/docs/#api/en/renderers/WebGLMultipleRenderTargets
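Below is a rough sketch following the shape of the official three.js webgl2_multiple_rendertargets example (a WebGL2 context is required; the attachment names, shader contents, and the existing renderer/scene/camera are assumptions):

// two color attachments filled by one fragment shader in one pass
const target = new THREE.WebGLMultipleRenderTargets(width, height, 2);
target.texture[0].name = 'diffuse';
target.texture[1].name = 'normal';

const material = new THREE.RawShaderMaterial({
  glslVersion: THREE.GLSL3,
  vertexShader: `
    in vec3 position;
    in vec3 normal;
    uniform mat4 modelViewMatrix;
    uniform mat4 projectionMatrix;
    uniform mat3 normalMatrix;
    out vec3 vNormal;
    void main() {
      vNormal = normalMatrix * normal;
      gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
    }
  `,
  fragmentShader: `
    precision highp float;
    in vec3 vNormal;
    layout(location = 0) out vec4 outDiffuse; // goes to target.texture[0]
    layout(location = 1) out vec4 outNormal;  // goes to target.texture[1]
    void main() {
      outDiffuse = vec4(1.0, 0.5, 0.2, 1.0);
      outNormal  = vec4(normalize(vNormal), 1.0);
    }
  `
});

renderer.setRenderTarget(target);
renderer.render(scene, camera); // both textures are written in a single pass
renderer.setRenderTarget(null);
// target.texture[0] and target.texture[1] can now be used as inputs elsewhere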
Talking about the storage and loading of models and animations, which would be better for a game engine:
1 - Have a mesh and a bone set for each model, both in the same file, each bone system with 10~15 animations (so each model has its own animations).
2 - Have a lot of meshes and a low number of bone sets, with the files separated from each other, so the same bone set (and its animations) can be used for more than one mesh; each bone set can have a lot of animations (notice that in this case, using the same bone set and the same animations will cause a loss of uniqueness).
And now, if I need to show 120~150 models in each frame (animated and skinned by the GPU), and 40 of them are the same type, which is better:
1 - Use an instancing system for all models in the game, even if I only need 1 model of each type.
2 - Detect which models need instancing (if they appear more than once), use a different render path (other shader programs) for them, and use non-instanced rendering for the other models.
3 - Don't use instancing, because the "gain" would be very low for this number of models.
All the "models" talked here are animated models, currently I use the MD5 file with GPU skinning but without instancing, and I would know if there are better ways to do all the process of animating.
If someone know a good tutorial or can put me on the way... I dont know how I could create a interpolated skeleton and use instancing for it, let me explain..:
I can compress all the bone transformations (matrices) for all animation for all frames in a simple texture and send it to the vertex shader, then read for each vertex for each model the repective animation/frame transformation. This is ok, I can use instancing here because I will always send the same data for the same model type, but, when I need to use a interpolate skeleton, should I do this interpolation on vertex shader too? (more loads from the texture could cause some lost of performance).
I would need calculate the interpolated skeleton on the CPU too anyway, because I need it for colision...
Any solutions/ideas?
I'm using DirectX, but I think this applies to other systems.
=> Now I just need an answer to the first question; the second is solved (but if anyone wants to give other suggestions, that's fine).
The best example I can think of and one I have personally used is one by NVidia called Skinned Instancing. The example describes a way to render many instances of the same bone mesh. There is code and a whitepaper available too :)
Skinned Instancing by NVidia
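Since the rest of this thread is three.js/GLSL-flavored, here is a minimal sketch of the "bone matrices in a texture" idea in those terms (it is not the NVIDIA sample itself, and every name here, BONES, FRAMES, uBoneTexture, instanceFrame, is made up for illustration); the same layout maps directly to HLSL/DirectX:

import * as THREE from 'three';

const BONES = 32;   // assumed fixed bone count per skeleton
const FRAMES = 60;  // assumed number of baked animation frames

// One RGBA float texel holds one matrix column, so 4 texels per bone.
// Texture layout: x = bone * 4 + column, y = frame.
function makeBoneTexture(flatMatrices /* Float32Array, 16 floats per bone per frame */) {
  const tex = new THREE.DataTexture(flatMatrices, BONES * 4, FRAMES,
                                    THREE.RGBAFormat, THREE.FloatType);
  tex.needsUpdate = true;
  return tex;
}

const vertexShader = `
  uniform sampler2D uBoneTexture;
  uniform vec2 uBoneTexSize;     // (BONES * 4, FRAMES)
  attribute vec4 skinIndex;      // 4 bone indices per vertex
  attribute vec4 skinWeight;     // matching weights
  attribute vec2 instanceFrame;  // per instance: (current frame, blend factor)

  mat4 getBoneMatrix(float bone, float frame) {
    float dx = 1.0 / uBoneTexSize.x;
    float x  = (bone * 4.0 + 0.5) * dx;
    float y  = (frame + 0.5) / uBoneTexSize.y;
    return mat4(texture2D(uBoneTexture, vec2(x,            y)),
                texture2D(uBoneTexture, vec2(x + dx,       y)),
                texture2D(uBoneTexture, vec2(x + 2.0 * dx, y)),
                texture2D(uBoneTexture, vec2(x + 3.0 * dx, y)));
  }

  mat4 blendedBone(float bone) {
    // Fetch this frame and the next and blend on the GPU, so the CPU only
    // uploads a frame index and a blend factor per instance.
    mat4 a = getBoneMatrix(bone, instanceFrame.x);
    mat4 b = getBoneMatrix(bone, instanceFrame.x + 1.0);
    return a * (1.0 - instanceFrame.y) + b * instanceFrame.y;
  }

  void main() {
    mat4 skin = skinWeight.x * blendedBone(skinIndex.x)
              + skinWeight.y * blendedBone(skinIndex.y)
              + skinWeight.z * blendedBone(skinIndex.z)
              + skinWeight.w * blendedBone(skinIndex.w);
    // Per-instance placement (instanceMatrix) is omitted for brevity.
    gl_Position = projectionMatrix * modelViewMatrix * skin * vec4(position, 1.0);
  }
`;

Blending the matrices like this is only an approximation of proper skeleton interpolation (it does not interpolate rotations correctly), but it keeps the per-instance animation state down to two floats, which is what makes instancing worthwhile here.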
Given a set of 2D images that cover all sides of an object (e.g. a car and its roof/sides/front/rear), how could I transform this into a 3D object?
Are there any libraries that could do this?
Thanks
These "2D images" are usually called "textures". You probably want a 3D library which allows you to specify a 3D model with bitmap textures. The library would depend on platform you are using, but start with looking at OpenGL!
OpenGL for PHP
OpenGL for Java
... etc.
I've heard of the program "Poser" doing this using heuristics for human forms, but otherwise I don't believe this is actually theoretically possible. You are asking to construct volumetric data from flat data (inferring the third dimension.)
I think you'd have to make a ton of assumptions about your geometry, and even then, you'd only really have a shell of the object. If you did this well, you'd have a contiguous surface representing the boundary of the object - not a volumetric object itself.
What you can do, like Tomas suggested, is slap these 2D images onto something. However, you will still need to construct a triangle mesh surface, and actually do all the modeling, for this to present a 3D surface.
I hope this helps.
What currently exists that can do anything close to what you are asking for automatically is extremely proprietary. There are no libraries, but there are some products.
The core issue is matching corresponding points in the images and being able to say: this spot in image A is this spot in image B, and they both match this spot in image C, etc.
There are three ways to go about this: manual matching (you have the photos and have to use your own brain to find the corresponding points), coded targets, and texture matching.
PhotoModeller, www.photomodeller.com, $1,145.00US, supports manual matching and coded targets. You print out a bunch of images, attach them to your object, shoot your photos, and the software finds the targets in each picture and creates a 3D object based on those points.
PhotoModeller Scanner, $2,595.00US, adds texture matching. Tiny bits of the images are compared to see if they represent the same source area.
Both PhotoModeller products depend on shooting the images with a calibrated camera, where you use a consistent focal length for every shot and you go through a calibration process to map the lens distortion of the camera.
If you can do manual matching, the Match Photo feature of Google SketchUp may do the job, and SketchUp is free. If you can shoot new photos, you can add your own targets like colored sticker dots to the object to help you generate contours.
If your images are drawings, like profile, plan view, etc. PhotoModeller will not help you, but SketchUp may be just the tool you need. You will have to build up each part manually because you will have to supply the intelligence to recognize which lines and points correspond from drawing to drawing.
I hope this helps.