How can I create an elevation like this in three.js?

I am building an earth mesh, but instead of a bump map, I want the surface to look like the picture below.
Can someone please help me? What do I do?

You could check out displacement maps or height maps. They manipulate the actual geometry of the mesh, creating real bumps where normal/bump maps only "fake" them.
To create something like in that picture, I believe you will need a sphere with a large number of vertices, since the elevation in the image looks very detailed.
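For example, here's a minimal sketch using a displacement map on a sphere; the texture file names are placeholders for your own assets, and a scene/camera/renderer are assumed to already exist:

```js
import * as THREE from 'three';

// A sphere with enough vertices for the displacement to show real detail.
const geometry = new THREE.SphereGeometry(1, 256, 256);

const loader = new THREE.TextureLoader();
const material = new THREE.MeshStandardMaterial({
  map: loader.load('earth_color.jpg'),              // placeholder colour texture
  displacementMap: loader.load('earth_height.png'), // placeholder grayscale height map
  displacementScale: 0.08                           // how far bright areas are pushed outward
});

const earth = new THREE.Mesh(geometry, material);
scene.add(earth); // assumes you already have a scene, camera and renderer set up
```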
I hope this was helpful, good luck!

Related

Getting the coordinates of the object

I am new to Forge. How can we get the coordinates or the three.js Vector of an object by its ID, like "getBoundingBox"? E.g. I want to get the coordinates of a wall of the house.
Mesh.geometry
Geometry.computeBoundingBox()
Geometry.center()
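A minimal sketch of how those calls fit together, assuming `mesh` is a THREE.Mesh you already hold:

```js
const geometry = mesh.geometry;
geometry.computeBoundingBox();          // fills in geometry.boundingBox
const box = geometry.boundingBox;       // a THREE.Box3 in the mesh's local space
// box.applyMatrix4(mesh.matrixWorld);  // if you need world-space coordinates instead
const center = new THREE.Vector3();
box.getCenter(center);                  // centre point of the box
console.log(box.min, box.max, center);
```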
Hope this helps.
This sample code shows how to draw a bounding box around an element. The coordinates of an element may vary according to the insertion or reference point that you consider for each one.
What you need in your case is getting the coordinates of the vertices rather than the bounding box, which may be quite approximate in some cases. Take a look at this article for pointers on how to get started:
Accessing mesh information with the Viewer

Google Maps-style quad-tree of materials on a single plane in Three.js – 1x1, 2x2, 4x4 and 8x8

I'm trying and failing to work out how to achieve a quad-tree of materials (images) on a single plane, much like a Google Maps-style zoomable tile that gets more accurate the closer you get.
In short, I want to be able to have a 1x1 image texture (covering a plane that is 256 units wide and tall) that can then be replaced with a 2x2 texture, that can then be replaced with a 4x4 texture, and so on.
Like the image example below…
Ideally, I want to avoid having to create a different plane for each zoom level / number of segments. A perfect solution would allow me to break a single plane into 8x8 segments (highest zoom) and update the number of textures on the fly. So it would start with a 1x1 texture across all 64 (8x8) segments, then change into a 2x2 texture with each texture covering 4x4 segments, and so on.
Unfortunately, I can't work out how to do this. I explored setting the materialIndex for each face, but you aren't able to update those after the first render, so that wouldn't work. I've tried looking into UV coordinates, but I don't understand how they would work in this situation, nor how to actually implement that in Three.js; there is little in the way of documentation / examples for this specific case.
A vertex shader is another option that came up in research, but again I don't know enough to understand how to construct that.
I'd appreciate any and all help with this, it will be a technique that proves valuable for other Three.js users I'm sure.
Not 100% sure what you are trying to do, or whether you are talking about texture atlasing (looking up different textures based on the current setting/zoom), but if you are looking for quad-tree-based texturing that increases in detail as you zoom in, then this is essentially what mipmapping is and does.
(It can also be used to do all sorts of weird things because of that, but that's another adventure entirely.)
Generally, mipmapping is automatic based on the texture filtering you use; however, it sounds like you need more control over it.
I created an example hidden away in the three.js source tree which may help:
http://mrdoob.github.com/three.js/examples/webgl_materials_texture_manualmipmap.html
It shows you how to load each mipmap level manually, rather than having it be generated automatically.
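Roughly, manual mipmaps in three.js look like this; the solid-colour canvases are just stand-ins so you can see which level gets sampled at a given distance, and the exact API may vary between three.js versions:

```js
import * as THREE from 'three';

// Build a solid-colour canvas of a given size to serve as one mip level.
function levelCanvas(size, color) {
  const canvas = document.createElement('canvas');
  canvas.width = canvas.height = size;
  const ctx = canvas.getContext('2d');
  ctx.fillStyle = color;
  ctx.fillRect(0, 0, size, size);
  return canvas;
}

const texture = new THREE.CanvasTexture(levelCanvas(128, '#f00'));
texture.mipmaps = [
  levelCanvas(128, '#f00'), // level 0: full resolution
  levelCanvas(64, '#0f0'),
  levelCanvas(32, '#00f'),
  levelCanvas(16, '#ff0'),
  levelCanvas(8, '#0ff'),
  levelCanvas(4, '#f0f'),
  levelCanvas(2, '#fff'),
  levelCanvas(1, '#000')    // level 7: 1x1
];
texture.generateMipmaps = false;                    // we supply every level ourselves
texture.minFilter = THREE.LinearMipmapLinearFilter; // trilinear filtering between levels
texture.needsUpdate = true;
```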
HTH

Moving objects along predefined paths with webgl

I've read the "Learning WebGL" tutorial, but it does not explain everything. Things like the Google WebGL experiments are amazing, but I've been wondering... how do you move a 3D object along a custom path to swing into the scene, or create a custom transition?
WebGL is essentially OpenGL for the web, so how do you do that in OpenGL?
What you're looking for is pretty common functionality, but it is hard to find concrete examples showing how to do it.
The easiest way I have found to do it is using Apple's J3DIMath.js WebGL library.
You basically want to define a "camera" perspective matrix, then move the camera along a predefined path of vertices through your 3D space. As you move along the "track" of vertices, at each draw frame you can call J3DIMatrix4.lookat(), passing it the position vector along the path, the direction to look at, and the "up" direction, and it will create the appearance of a moving camera. A rough sketch of that loop is below.
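Here the `path` vertices are made up, and the camera looks at the origin for simplicity:

```js
// Step the eye along a list of track vertices and rebuild the view matrix each frame.
const path = [[0, 0, 10], [2, 0, 8], [4, 1, 6]]; // ... your predefined track vertices ...
let step = 0;

function drawFrame() {
  const eye = path[step % path.length];
  const view = new J3DIMatrix4();
  view.lookat(eye[0], eye[1], eye[2], // camera position on the track
              0, 0, 0,                // point to look at (scene centre here)
              0, 1, 0);               // "up" direction
  // ... load `view` into your shader's view/model-view uniform and draw the scene ...
  step++;
  requestAnimationFrame(drawFrame);
}
requestAnimationFrame(drawFrame);
```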
I hope this helps!
J3DIMath.js

360 degree photo viewer

I have photos that were taken with a 360-degree lens. Does anyone know how to create a 360-degree photo viewer?
Please don't send links to already-developed software; it would be better if someone has
a roadmap / example code / articles.
Preferred technologies could be
Java / Flash / Flex / HTML5 / JavaScript.
Well, I haven't done it myself yet, but it basically boils down to projecting the photos you have onto some primitive surrounding the camera.
The easiest would be a cube, but this will probably give not-so-good results, especially at the edges and corners. Better would be a sphere onto which the images are projected.
Basically, adding 3D primitives and mapping textures onto them should be easily achievable with Java or Flash. If you want to program it for browsers, have a look at WebGL. This would be a more future-oriented approach that doesn't need Flash, and it already provides good methods for texture mapping on surfaces.
If by 360° you only mean the horizontal plane, you could also use a cylinder, which makes it much easier than projecting onto a sphere. You'll just need a wide panorama photo that goes around completely and map it to the cylinder.
So basically, no matter which primitive you choose, you'll need to position your camera within the primitive, project the photos onto it, and implement some controls that allow the user to rotate the camera freely.
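If you go the WebGL route, a minimal sketch with three.js could look like this, where 'panorama.jpg' stands in for your own equirectangular image:

```js
import * as THREE from 'three';

const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(75, innerWidth / innerHeight, 0.1, 100);

// A sphere turned inside out so its texture faces the camera placed at the centre.
const geometry = new THREE.SphereGeometry(50, 64, 32);
geometry.scale(-1, 1, 1);

const texture = new THREE.TextureLoader().load('panorama.jpg'); // placeholder asset
scene.add(new THREE.Mesh(geometry, new THREE.MeshBasicMaterial({ map: texture })));

const renderer = new THREE.WebGLRenderer();
renderer.setSize(innerWidth, innerHeight);
document.body.appendChild(renderer.domElement);

// Hook mouse/touch input up to camera.rotation to let the user look around.
renderer.setAnimationLoop(() => renderer.render(scene, camera));
```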
Can you provide any example photos? This would make it easier to find a way to solve your problem and find a good way of projecting the texture...
Hope that helps... if not, keep asking...

How to generate one texture from N textures?

Let's say I have N pictures of an object, taken from N known positions. I also have the 3D geometry of the object, and I know all the characteristics of both the camera and the lens.
I want to generate a single giant picture from the N pictures I have, so that it can be mapped/projected onto the object's surface.
Does anybody know where to start? Articles, references, books?
Not sure if it helps you directly, but these guys have some amazing demos of some related techniques: http://grail.cs.washington.edu/projects/videoenhancement/videoEnhancement.htm.
1. Generate texture-mapping coords for your geometry.
2. Generate a big blank texture.
3. For each pixel of the texture (see the sketch after this list):
   - figure out the point on the geometry it maps to;
   - figure out the pixel in each image that projects onto this point;
   - colour the pixel with a weighted blend of all these pixels, weighted by how much the surface normal faces the corresponding camera, ignoring those images where there's another piece of geometry between the point and the camera.
4. Apply your completed texture to the geometry.
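A hedged sketch of step 3, where `surfacePointAt`, `projectToImage`, `isOccluded`, `sampleImage` and `directionFrom` are hypothetical stand-ins for your own geometry and camera code; only the blending logic is spelled out:

```js
// Dot product of two 3-component vectors.
const dot = (a, b) => a[0] * b[0] + a[1] * b[1] + a[2] * b[2];

function bakeTexture(width, height, cameras, images) {
  const out = new Float32Array(width * height * 3); // RGB, row-major
  for (let y = 0; y < height; y++) {
    for (let x = 0; x < width; x++) {
      // Map this texel back to a 3D point (and normal) on the surface.
      const { point, normal } = surfacePointAt(x / width, y / height); // hypothetical
      let r = 0, g = 0, b = 0, wSum = 0;
      for (let i = 0; i < cameras.length; i++) {
        // Skip views where other geometry blocks the line of sight.
        if (isOccluded(point, cameras[i])) continue;               // hypothetical
        // Weight by how directly the surface faces this camera.
        const toCam = cameras[i].directionFrom(point);             // hypothetical unit vector
        const w = Math.max(0, dot(normal, toCam));
        if (w === 0) continue;
        // Project the point into image i and sample its colour there.
        const uv = projectToImage(point, cameras[i]);              // hypothetical
        const [pr, pg, pb] = sampleImage(images[i], uv);           // hypothetical
        r += w * pr; g += w * pg; b += w * pb; wSum += w;
      }
      const o = (y * width + x) * 3;
      if (wSum > 0) { out[o] = r / wSum; out[o + 1] = g / wSum; out[o + 2] = b / wSum; }
    }
  }
  return out;
}
```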
Google up "shadow mapping", as the same problem is solved during that process (images of the scene as seen from some known points are projected onto the 3D geometry in the scene). The problem is well-understood and there is plenty of code.
I'd suspect that this can be done using some variation of projection maps mixed with image reconstruction.
Have a look at cube mapping. It may be useful. You may want to project another convex shape onto the cube and use the resulting texture as a conventional cube map texture.
