Loading real terrain into three.js using free map data

Has anyone got any ideas on how to load real terrain data into a three.js scene?
I would like to place a 3D model on the actual terrain, i.e. the real elevations with satellite imagery overlaid.
Create scene: OK
Load and animate models: OK
Terrain and satellite imagery: ???
Thanks in advance.
Jon

Three.js has an example of how to make a terrain, so that part is covered.
Regarding the satellite imagery, you'll use it as a texture on your terrain. The important thing is to get the texture coordinates right, which may end up being the tricky part.
This blog post gives a good example, and its code is available online, too.
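As a minimal sketch of the texturing side, assuming a file 'satellite.jpg' that covers exactly the same ground extent as the terrain plane (so the default 0..1 UVs of PlaneGeometry line up with the image; 'scene' is assumed to exist):

// Drape a satellite image over a subdivided plane.
var texture = new THREE.TextureLoader().load('satellite.jpg');
var geometry = new THREE.PlaneGeometry(1000, 1000, 256, 256);
geometry.rotateX(-Math.PI / 2); // lay the plane flat (XZ plane, Y up)
var material = new THREE.MeshLambertMaterial({ map: texture });
scene.add(new THREE.Mesh(geometry, material));

Because the imagery and the terrain cover the same extent, no custom texture coordinates are needed here; if they did not line up, you would have to remap the UVs yourself.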

If you somehow have, or are able to calculate, elevation data for the points you need in a regular grid, you can use a plane geometry and a JavaScript XML/JSON loader to load your data into the plane geometry's vertices; a minimal sketch of this follows below.
Use any type of material you need for the plane and set its "map" property to an image texture loaded with an image loader.
If your elevation data is at randomly placed points, you can use Face3 (or another three.js geometry type) together with an algorithm that creates a TIN (triangulated irregular network) to visualize the terrain.
For the geospatial part of the question, you might also want to take a look at the Cesium library and the Cesium.js documentation, as well as this three.js terrain-loading method and this osg.js demo.
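Here is the grid-based sketch mentioned above. The URL '/elevations.json' and its payload (a flat, row-major array with one height per vertex of the (segments+1) x (segments+1) grid) are assumptions for illustration:

// Fetch a grid of elevations and write them into the plane's vertices.
// Y is "up" after the rotateX below.
var segments = 255;
var geometry = new THREE.PlaneGeometry(1000, 1000, segments, segments);
geometry.rotateX(-Math.PI / 2);
fetch('/elevations.json')
  .then(function (response) { return response.json(); })
  .then(function (heights) {
    var position = geometry.attributes.position;
    for (var i = 0; i < position.count; i++) {
      position.setY(i, heights[i]);
    }
    position.needsUpdate = true;
    geometry.computeVertexNormals(); // so lighting matches the new relief
  });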

Related

What is BufferGeometry in three.js?

The explanation of BufferGeometry in the three.js documentation is quite hard for me to understand.
It says:
BufferGeometry is a representation of mesh, line, or point geometry.
Includes vertex positions, face indices, normals, colors, UVs, and
custom attributes within buffers, reducing the cost of passing all
this data to the GPU.
I didn't quite understand what those sentences meant.
What is the purpose of BufferGeometry? How do you visualize BufferGeometry in real life?
Thank you!
An instance of this class holds the geometry data intended for rendering.
If you want to visualize this data, you have to define a material and the type of 3D object (mesh, lines or points). The code example on the documentation page shows the respective JavaScript statements.
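For example, a stripped-down version of the documentation's example, rendering a single triangle ('scene' is assumed to exist):

// A triangle stored as a BufferGeometry: one flat typed array of
// vertex positions that can be handed to the GPU as-is.
var geometry = new THREE.BufferGeometry();
var vertices = new Float32Array([
  -1.0, -1.0, 0.0,  // vertex 0
   1.0, -1.0, 0.0,  // vertex 1
   0.0,  1.0, 0.0   // vertex 2
]);
geometry.setAttribute('position', new THREE.BufferAttribute(vertices, 3));
var material = new THREE.MeshBasicMaterial({ color: 0xff0000 });
scene.add(new THREE.Mesh(geometry, material)); // Mesh is the 3D object type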

Modifying the length of different body parts in three.js

I have a model of a human body, which I am able to load in three.js with the OBJ loader. After loading the model in three.js I need to do some post-processing, like:
scaling the length of arm
scaling the length of leg
Is it possible to do that, and if so, how? I know that an OBJ file stores the information necessary to create meshes (i.e. vertices and faces) and, if required, material information. Can we add any extra information to achieve this?
You want to rig your model. You need to define the skeleton, and Three.js can then use "bones" to scale, position and stretch aspects of your mesh. A simple example of a rigged model in three.js is here:
https://threejs.org/docs/#api/objects/SkinnedMesh
A tutorial on rigging your model in Blender is available here: https://www.youtube.com/watch?v=eEqB-eKcv7k&index=15&list=PLOGomoq5sDLutXOHLlESKG2j9CCnCwVqg
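Once the rigged model is loaded as a SkinnedMesh, scaling a body part comes down to scaling its bone. A hedged sketch ('upper_arm_L' is a hypothetical bone name; use whatever names your rig defines):

// Find a bone by name in the loaded SkinnedMesh and stretch it.
var arm = skinnedMesh.skeleton.getBoneByName('upper_arm_L'); // hypothetical name
if (arm) {
  arm.scale.y = 1.5; // lengthen the arm along the bone's axis
}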

Convert THREE.js scene to CANNON.js world

I have a basic THREE.js scene, created in Blender, including cubes and rotated planes. Is there any way that I can automatically convert this THREE.js scene into a CANNON.js world ?
Thanks
Looking at the Three.js Blender exporter, it seems it only exports mesh data, with no information about the mathematical shapes (boxes, planes, spheres, etc.) that Cannon.js needs to work. You could try to import your meshes directly into Cannon.js using its Trimesh class, but this would sadly only work for collisions against spheres and planes.
What you need to feed Cannon.js is mathematical geometry data, telling it which of the triangles in your mesh represent a box (or plane) and where its center of mass is.
A common (manual) workflow for creating 3D WebGL physics is to import the 3D models into a WebGL-enabled game engine (like Unity, Goo Create or PlayCanvas). In the game engine you can add collider shapes (boxes, planes, spheres, etc.) to your models so the physics engine can work efficiently. From there you can preview your physics simulation and export a complete WebGL experience.
Going to post another answer since there are a few new options to consider here...
I wrote a simple mesh2shape(...) helper that can convert (one object at a time) from THREE.Mesh to CANNON.Shape objects. It doesn't support certain features, such as heightmaps/terrain.
Example:
var shape = mesh2shape(object3D, { type: mesh2shape.Type.BOX });
There is an (experimental!) BLENDER_physics extension for the glTF format to include physics data with a model. You could add physics data in Blender, export to glTF, and then modify THREE.GLTFLoader to pass the physics data along to your application, helping you construct CANNON.js objects.
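If your scene only contains simple primitives, you can also convert them by hand. A sketch for a box-shaped mesh (the CANNON 'world' and the THREE 'mesh' are assumed to exist already):

// Build a CANNON.Box from the mesh's bounding box and copy its transform.
mesh.geometry.computeBoundingBox();
var bb = mesh.geometry.boundingBox;
var halfExtents = new CANNON.Vec3(
  (bb.max.x - bb.min.x) / 2,
  (bb.max.y - bb.min.y) / 2,
  (bb.max.z - bb.min.z) / 2
);
var body = new CANNON.Body({ mass: 1, shape: new CANNON.Box(halfExtents) });
body.position.copy(mesh.position);     // CANNON.Vec3.copy accepts any {x, y, z}
body.quaternion.copy(mesh.quaternion); // likewise for quaternions
world.addBody(body);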

Basic approach to pupil constriction/dilation of eye model in OpenGL

I'm new to OpenGL-ES and looking for the best approach for creating a realistic model of an eye whose pupil can dilate and constrict so I have a plan in mind while running through tutorials.
I've made a mesh in Blender that is basically a sphere with a hole (the "pole", i.e. the central vertex, is removed, along with a couple of the surrounding circular edge loops).
I plan to add an iris texture directly to the sphere's polys surrounding the hole.
To change pupil size, do I just need a function to reposition the vertices of the hole so the hole dilates or contracts?
I'm going to use OpenGL within an Objective-C app. I have Jeff Lamarche's Objective-C export script. Is it standard to export only the mesh from Blender and add the textures in code later in Xcode? Or is it easier/better to set up the textures on the meshes in Blender first and export the more finished product's data to Xcode?
Your question is a bit old, so I'm not sure how much progress you've made, but as I've been climbing up the learning curve myself I thought I'd take a shot at answering.
If you want to animate the individual vertices of your model, I believe the method you'll want is vertex skinning. I can't speak much on that front as I haven't yet had reason to experiment with it, although it's a technique only available in OpenGL ES 2.0. (That is probably where you want to start anyway; the increased flexibility over 1.1 is more than worth any additional steepness in the learning curve.)
The answer to your texturing question is somewhat mixed. You'll need to actually apply the texture in OpenGL, but what Blender can do for you is determine the texture coordinates. Each vertex of your mesh will have a texture coordinate associated with it: an (x, y) pair that maps to a location on the texture image. The coordinates range from 0.0 to 1.0, so, since your image texture is a rectangle, the texture coordinate {0, 0} maps to the bottom-left corner, {1, 1} maps to the top-right corner, and {0.5, 0.5} maps to the exact center of the image.
So in Blender, you'd want to go ahead and texture the object with UV mappings. When you export, although your exported mesh won't contain any of the image content, it will retain the texture coordinates that map to your image content. This lets you apply the texture in OpenGL so that it appears the same way it did in Blender.
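To make the mapping concrete, here is what the exported per-vertex data conceptually looks like for a textured quad (shown here as a JavaScript typed array purely for illustration; the same layout applies whatever language you upload it from):

// Interleaved vertex data: position (x, y, z) then texture coordinate (u, v).
var quadVertices = new Float32Array([
  //  x     y    z     u    v
  -1.0, -1.0, 0.0,  0.0, 0.0,  // bottom-left  -> bottom-left of the image
   1.0, -1.0, 0.0,  1.0, 0.0,  // bottom-right -> bottom-right of the image
   1.0,  1.0, 0.0,  1.0, 1.0,  // top-right    -> top-right of the image
  -1.0,  1.0, 0.0,  0.0, 1.0   // top-left     -> top-left of the image
]);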
I've personally had some trouble getting Jeff Lamarche's script to spit out the texture coordinates, as the Blender API seems to change significantly with each release. I've had more success with an .obj converter, so I've been exporting from Blender to .obj and using a command-line tool to go from .obj to a C header file.
If you encounter similar problems with Lamarche's script, this post might help solve it: http://38leinad.wordpress.com/2012/05/29/blender-2-6-exporting-uv-texture-coordinates/
And this is a good resource for a .obj to .h script:
http://heikobehrens.net/2009/08/27/obj2opengl/

How to generate one texture from N textures?

Let's say I have N pictures of an object, taken from N known positions. I also have the 3D geometry of the object, and I know all the characteristics of both the camera and the lens.
I want to generate a unique giant picture from the N pictures I have, so that it can be mapped/projected onto the object surface.
Does anybody know where to start? Articles, references, books?
Not sure if it helps you directly, but these guys have some amazing demos of some related techniques: http://grail.cs.washington.edu/projects/videoenhancement/videoEnhancement.htm.
Generate texture-mapping coordinates for your geometry.
Generate a big blank texture.
For each pixel of that texture:
- Figure out the point on the geometry it maps to.
- Figure out the pixel in each source image that projects onto this point.
- Colour the texture pixel with a blend of all these pixels, weighting each by how directly the surface normal faces the corresponding camera, and ignoring images where another piece of geometry sits between the point and the camera.
Apply your completed texture to the geometry.
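As a hedged sketch of that inner loop (texelToSurfacePoint, isVisibleFrom, projectToImage, weightForCamera and outTexture.setPixel are all hypothetical helpers standing in for your geometry, occlusion and camera math):

// Bake N camera images into one texture by weighted per-texel blending.
function bakeTexture(outTexture, geometry, cameras, images) {
  for (var y = 0; y < outTexture.height; y++) {
    for (var x = 0; x < outTexture.width; x++) {
      var point = texelToSurfacePoint(geometry, x, y);
      if (!point) continue; // this texel maps to no surface
      var r = 0, g = 0, b = 0, total = 0;
      for (var i = 0; i < cameras.length; i++) {
        if (!isVisibleFrom(point, cameras[i], geometry)) continue; // occluded
        var pixel = projectToImage(point, cameras[i], images[i]);
        var w = weightForCamera(point, cameras[i]); // e.g. dot(normal, toCamera)
        r += w * pixel.r; g += w * pixel.g; b += w * pixel.b;
        total += w;
      }
      if (total > 0) {
        outTexture.setPixel(x, y, r / total, g / total, b / total);
      }
    }
  }
}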
Google "shadow mapping"; the same problem is solved during that process (images of the scene as seen from some known points are projected onto the 3D geometry in the scene). The problem is well understood and there is plenty of code.
I'd suspect that this can be done using some variation of projection maps mixed with image reconstruction.
Have a look at cube mapping; it may be useful. You may want to project another convex shape onto the cube and use the resulting texture as a conventional cubemap texture.
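For reference, consuming a finished six-face cube map in three.js looks like this (the face image file names are assumptions):

// Load six face images as a cube map and use it as an environment map.
var cubeTexture = new THREE.CubeTextureLoader().load([
  'px.jpg', 'nx.jpg',  // +x, -x
  'py.jpg', 'ny.jpg',  // +y, -y
  'pz.jpg', 'nz.jpg'   // +z, -z
]);
var material = new THREE.MeshBasicMaterial({ envMap: cubeTexture });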
