In GLGE, is it possible to specify the face of a mesh that a texture should be mapped to? (WebGL as well) - opengl-es

I'm trying to make an environment map in the form of a cube that has images mapped onto particular faces to give the illusion of being in the area (sort of like Google's Street View).
I'm trying to do it in GLGE; however, with my limited experience, I only know how to map one texture to a whole mesh (which is what I'm doing at the moment). If I were to create six different textures, would it be possible for me to specify the faces that those textures should be loaded onto?

You could generate the six faces of the cube as separate objects and use a different texture for each. An alternative is to set different texture coordinates for the different faces of the cube.
If you want ready-to-run code, three.js has a couple of skybox examples. E.g. http://mrdoob.github.com/three.js/examples/webgl_panorama_equirectangular.html
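For the per-face-texture approach, a minimal three.js sketch (not GLGE; the six image file names are placeholders and a `scene` is assumed to exist) could look like this:

```js
// A minimal three.js sketch (not GLGE): a cube with a different texture per face.
// The six image file names are placeholders, and `scene` is assumed to exist.
const loader = new THREE.TextureLoader();
const materials = [
  'px.jpg', 'nx.jpg', 'py.jpg', 'ny.jpg', 'pz.jpg', 'nz.jpg'
].map(file => new THREE.MeshBasicMaterial({
  map: loader.load(file),
  side: THREE.BackSide  // draw the inside faces so the camera can sit within the cube
}));

// BoxGeometry has one material group per face, so an array of six materials works here.
const skybox = new THREE.Mesh(new THREE.BoxGeometry(500, 500, 500), materials);
scene.add(skybox);
```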

You should look at "UV Mapping". Check this example. Roughly, UVs describe how the polygons are mapped (in x,y) on the texture.
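As a hedged illustration in three.js terms (my own sketch, not the example the answer links to), UVs are just a per-vertex attribute; here a quad samples only the left half of its texture:

```js
// Sketch: a quad (two triangles) whose UVs cover only the left half of the texture.
const geometry = new THREE.BufferGeometry();
geometry.setAttribute('position', new THREE.Float32BufferAttribute([
  -1, -1, 0,   1, -1, 0,   1, 1, 0,   -1, 1, 0
], 3));
geometry.setAttribute('uv', new THREE.Float32BufferAttribute([
  0, 0,   0.5, 0,   0.5, 1,   0, 1   // (u, v) from (0,0) to (0.5,1): left half of the image
], 2));
geometry.setIndex([0, 1, 2, 0, 2, 3]);
```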

Sounds like you want a cube map texture: it takes six separate images, and you look up into it with a direction vector rather than (u, v) coordinates. Cube maps are the usual way to do environments, and they are available in WebGL.
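In three.js, for example, a cube map is loaded with `CubeTextureLoader` and the direction-vector lookup happens on the GPU (a rough sketch; the path, file names, and the `scene`/`material` variables are assumptions):

```js
// Rough three.js sketch: a cube map is six images looked up with a direction vector.
const cubeTexture = new THREE.CubeTextureLoader()
  .setPath('textures/cube/')
  .load(['px.jpg', 'nx.jpg', 'py.jpg', 'ny.jpg', 'pz.jpg', 'nz.jpg']);

scene.background = cubeTexture;   // use it as the environment backdrop
material.envMap  = cubeTexture;   // or as a reflection/environment map on a material
```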

Related

Is it more performant for three.js to load a mesh that's already been triangulated than a mesh using quads?

I've read that Three.js triangulates all mesh faces, is that correct?
Then I realized that most of the glTF models I've been using have quad faces. It's very easy to triangulate faces in Blender, so I'm curious whether pre-triangulating the faces will result in a quicker load of the mesh?
Thanks in advance, and if you have any other performance tips on three.js and gltf's (besides those listed at https://discoverthreejs.com/tips-and-tricks/) that would be super helpful!
glTF, in its current form, does not support quad faces, only triangles. Current glTF exporters (including Blender's) triangulate the model when creating the glTF file, and some importers will automatically try to merge the triangles back into quads on import.
By design, glTF stores its data in a similar manner to WebGL's vertex attributes, such that it can render efficiently, with minimal pre-processing. But there are some things you can do when creating a model, to help it reach these goals:
Combine materials when possible, to reduce the number of draw calls.
Combine meshes/primitives when possible, also to reduce draw calls.
Be aware that discontinuous normals/UVs increase vertex count (again because of vertex attributes).
Avoid creating textures filled with solid colors. Use Blender's default color/value node inputs instead.
Keep texture sizes web-friendly, and power-of-two. Mobile clients sometimes can't handle anything larger than 2048x2048. Might also try 1024x1024, etc.
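As a tiny illustration of the power-of-two tip, here is a hypothetical helper (not part of glTF or three.js) to check a texture dimension:

```js
// Hypothetical helper to check whether a texture dimension is a power of two.
function isPowerOfTwo(n) {
  return n > 0 && (n & (n - 1)) === 0;
}

console.log(isPowerOfTwo(2048)); // true  -> generally safe, even on mobile clients
console.log(isPowerOfTwo(1920)); // false -> consider resizing to 2048 or 1024
```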

Creating Heatmap Over 3D Model From Vector 3 Point Data

I am attempting to render a flat, dynamically created heatmap on top of a 3D model that is loaded from an OBJ (or STL).
I am currently loading and rendering an OBJ with Three.js. I have Vector3 points that I am currently drawing as simple red cubes (image below). These data points are all raycast onto my OBJ's mesh and lie on its surface. The Vector3 points are loaded from an external data source and will change depending on what data is being viewed/collected.
I would like to render my vector3 point data into a heatmap on the surface of my OBJ. Here are some examples illustrating the type of visual effects I am trying to achieve:
I feel like vertex coloring is the way to achieve this, but my issue is that my OBJ model does not have enough tessellation for it. As you can see, many red dots fall on each face. I am struggling to find a way to paint over my object's mesh with colors exactly where my red point data is. I was assuming I would need to convert my random Vector3 points into a mesh, but cannot find a method to do so.
I've looked at the possibility of generating a texture, but 1) I do not have a UV map for my OBJs and do not see a way to generate one programmatically, and 2) I am a bit lost on how I would correlate Vector3 point data to UV points.
I've looked at using shaders, but my Vector3 point data appears to be too large to pass to a shader (there could be hundreds of thousands of points). I also feel it is not the right approach to render the heatmap every frame; I would rather render it only once on load.
I've looked into isosurfaces with point clouds and the marching cubes algorithm, but I didn't think this was the right direction, since my data is only somewhat like a point cloud and I am unsure how I would keep the result smooth along the surface of my OBJ mesh.
Although I would prefer to keep everything in JavaScript for viewing in the browser, I am open to doing server-side processing in any language/program with REST, so long as it can be automated without human intervention and pushed back to the browser for rendering.
Any suggestions or guidance is appreciated.
I'm only guessing, but it seems like first you need UV coordinates that map every triangle to the texture. Rather than do this by hand, I'd suggest using a modeling package; most modeling packages have some way of automatically and uniformly mapping every triangle to a texture (Blender, for example, has automatic unwrapping tools).
Next, put the heatmap into the texture by computing which triangles are affected by each dot (your raycasting), looking up their texture coordinates, projecting the dot into texture space, and then writing the colors into that part of the texture. I'm only guessing, but you probably need to consider not just exact points but also adjacent triangles, since heat that lands near the edge of a triangle needs to bleed over into the adjacent triangle, and that adjacent triangle might be using a completely different part of the texture.
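A minimal sketch of that idea in three.js, assuming the OBJ already has UVs (e.g. after unwrapping it in Blender) and that `mesh` and `points` already exist; the ray direction here is purely illustrative:

```js
// Splat each heat point into a canvas, then use the canvas as the mesh's texture.
const size = 2048;
const canvas = document.createElement('canvas');
canvas.width = canvas.height = size;
const ctx = canvas.getContext('2d');
ctx.fillStyle = '#000';
ctx.fillRect(0, 0, size, size);

const raycaster = new THREE.Raycaster();
for (const p of points) {
  // Cast from slightly above each point back toward the surface (direction is illustrative).
  raycaster.set(p.clone().add(new THREE.Vector3(0, 1, 0)), new THREE.Vector3(0, -1, 0));
  const hit = raycaster.intersectObject(mesh)[0];
  if (!hit || !hit.uv) continue;            // intersections carry interpolated UVs
  ctx.fillStyle = 'rgba(255, 0, 0, 0.2)';   // accumulate "heat" with low alpha
  ctx.beginPath();
  ctx.arc(hit.uv.x * size, (1 - hit.uv.y) * size, 8, 0, Math.PI * 2);
  ctx.fill();
}

mesh.material.map = new THREE.CanvasTexture(canvas);
mesh.material.needsUpdate = true;
```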

Creating multiple segments of a sphere in three.js

I want to create a series of segments of a sphere which can later be combined to give the illusion of an entire sphere. I want to do this so that I can use different textures.
Please follow the link below:
Texturing a sphere in THREE.js
In addition to the above answer, you can accomplish this dynamically as well, using different combinations of the phi and theta values, so that you can decide the number of sphere segments to create at run time, based on the number of textures you have.
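A rough sketch of that idea with three.js's `SphereGeometry` (the texture paths are placeholders and `scene` is assumed to exist):

```js
// Build a full sphere out of N vertical wedges, each with its own texture.
const texturePaths = ['seg0.jpg', 'seg1.jpg', 'seg2.jpg', 'seg3.jpg'];
const loader = new THREE.TextureLoader();
const segments = new THREE.Group();

const n = texturePaths.length;
for (let i = 0; i < n; i++) {
  // SphereGeometry(radius, widthSegs, heightSegs, phiStart, phiLength, thetaStart, thetaLength)
  const geometry = new THREE.SphereGeometry(5, 32, 32, i * (2 * Math.PI / n), 2 * Math.PI / n);
  const material = new THREE.MeshBasicMaterial({ map: loader.load(texturePaths[i]) });
  segments.add(new THREE.Mesh(geometry, material));
}
scene.add(segments);
```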

Convert 2D planes to 3D model

We have multiple 2D planar images of an object scanned from a fan-beam perspective. An example is in Fig 5 below. We use multiple grainy dotted planes to scan the whole object.
The issue with these images is that they cannot be directly mapped to a 3D plane due to the fan beam deformation.
Are there correction algorithms/methods that can be recommended so that these planes can be correctly mapped to a 3D plane and the object can be reconstructed properly?
Depending on how you store your data, there are various possible approaches. Guessing that you store the data as points ("grainy dotted planes"), you can interpolate the corresponding points in consecutive planes and thereby get a scan of the entire object. This does require the points to be in the same frame, so you might have to apply some kind of transformation to relate each plane's parameters to a global frame.
Another approach might be a least-squares fit of each plane, which can then be used to stitch the object together. You might also find helpful approaches in the literature on scanning 3D objects using 2D methods. Hope this helps.
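A rough sketch of the "same frame" step in plain JavaScript, where the plane poses, point lists, and field names are all hypothetical placeholders:

```js
// Transform each plane's 2D points into a shared 3D frame using that plane's known pose.
// pose = { origin: [x,y,z], xAxis: [x,y,z], yAxis: [x,y,z] } with unit axes spanning the plane.
function planePointToWorld(p, pose) {
  return [
    pose.origin[0] + pose.xAxis[0] * p.u + pose.yAxis[0] * p.v,
    pose.origin[1] + pose.xAxis[1] * p.u + pose.yAxis[1] * p.v,
    pose.origin[2] + pose.xAxis[2] * p.u + pose.yAxis[2] * p.v,
  ];
}

const worldPoints = planes.flatMap(plane =>
  plane.points.map(p => planePointToWorld(p, plane.pose))
);
```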

How to generate one texture from N textures?

Let's say I have N pictures of an object, taken from N known positions. I also have the 3D geometry of the object, and I know all the characteristics of both the camera and the lens.
I want to generate a unique giant picture from the N pictures I have, so that it can be mapped/projected onto the object surface.
Does anybody know where to start? Articles, references, books?
Not sure if it helps you directly, but these guys have some amazing demos of some related techniques: http://grail.cs.washington.edu/projects/videoenhancement/videoEnhancement.htm.
Generate texture-mapping coords for your geometry.
Generate a big blank texture.
For each pixel:
    Figure out the point on the geometry it maps to.
    Figure out the pixel in each image that projects onto this point.
    Colour the pixel with a weighted blend of all these pixels, weighted by how much the surface normal is facing the corresponding camera, and ignoring those images where there's another piece of geometry between the point and the camera.
Apply your completed texture to the geometry.
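A minimal sketch of that per-texel loop in plain JavaScript; `geometryPointForTexel`, `projectToImage`, `isOccluded`, and the `cameras` array are hypothetical stand-ins for your own geometry and camera code:

```js
// Small vector helpers.
const dot = (a, b) => a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
const sub = (a, b) => [a[0] - b[0], a[1] - b[1], a[2] - b[2]];
const normalize = v => { const len = Math.hypot(v[0], v[1], v[2]); return v.map(c => c / len); };

function bakeTexture(width, height, cameras) {
  const texture = new Float32Array(width * height * 3); // the big blank RGB texture
  for (let y = 0; y < height; y++) {
    for (let x = 0; x < width; x++) {
      // The point on the geometry this texel maps to, via the texture-mapping coords.
      const { point, normal } = geometryPointForTexel(x / width, y / height);
      const color = [0, 0, 0];
      let totalWeight = 0;
      for (const cam of cameras) {
        if (isOccluded(point, cam)) continue;        // geometry sits between point and camera
        const sample = projectToImage(point, cam);   // [r, g, b] from this photo, or null
        if (!sample) continue;
        // Weight by how much the surface normal faces this camera.
        const w = Math.max(0, dot(normal, normalize(sub(cam.position, point))));
        for (let c = 0; c < 3; c++) color[c] += sample[c] * w;
        totalWeight += w;
      }
      if (totalWeight > 0) {
        const i = (y * width + x) * 3;
        for (let c = 0; c < 3; c++) texture[i + c] = color[c] / totalWeight;
      }
    }
  }
  return texture;
}
```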
Google up "shadow mapping", as the same problem is solved during that process (images of the scene as seen from some known points are projected onto the 3D geometry in the scene). The problem is well-understood and there is plenty of code.
I'd suspect that this can be done using some variation of projection maps mixed with image reconstruction.
Have a look at cube mapping; it may be useful. You may want to project another convex shape onto the cube and use the resulting texture as a conventional cube-map texture.
