WebGL Heightmap from array? - three.js

Does anyone happen to know of an example, or can anyone point me in the right direction, on rendering a heightmap/terrain in WebGL from a three-dimensional array? Basically I have an array that contains data for x and y coordinates and a 'height' (z axis).
Everything I've found (in the three.js world, for example) shows how to create one dynamically or from a 2D image. Ideally I'd like to have the color of the pixel/particle related to the height. Basically I'm looking to do something like below, but in WebGL:

There are many examples of how to do this already available. You can search for three.js + heightmap.
Or try three.js + 3d graph.
Here is something called a "Graphulus-Function" that looks pretty much exactly like what you need.
Here you can find another interesting reference.
Without more details on your data it is hard to say if these examples suit your needs...
Check also three.js issue 1003 on GitHub, "Terrain from Heightmap", where this topic is discussed and lots of great examples are mentioned.
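In case it helps to have something concrete, here is a minimal sketch of the usual approach: displace the vertices of a plane by your array values and color each vertex by its height. It assumes a square 2D array heights[i][j] of z-values, a precomputed maxHeight, and a recent three.js where PlaneGeometry is BufferGeometry-based; adapt the index mapping to your actual data layout.

    // Sketch: heightmap mesh from a 2D array, colored by height.
    // Assumes heights[i][j] holds z-values and maxHeight is their maximum.
    const size = heights.length;
    const geometry = new THREE.PlaneGeometry(10, 10, size - 1, size - 1);
    const position = geometry.attributes.position;
    const colors = [];
    const color = new THREE.Color();

    for (let i = 0; i < position.count; i++) {
      const h = heights[Math.floor(i / size)][i % size];
      position.setZ(i, h);                                 // displace vertex by height
      color.setHSL(0.7 - 0.7 * (h / maxHeight), 1.0, 0.5); // blue (low) to red (high)
      colors.push(color.r, color.g, color.b);
    }

    geometry.setAttribute('color', new THREE.Float32BufferAttribute(colors, 3));
    geometry.computeVertexNormals();
    const terrain = new THREE.Mesh(geometry,
      new THREE.MeshLambertMaterial({ vertexColors: true }));
    scene.add(terrain);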

Related

Is there a way to add an outline in SceneKit?

I've been making a game in SceneKit, but the edges of objects are difficult to see, which makes some of the game's details impossible to make out. Is there a way to draw a black outline around all the game objects?
You could use an SCNTechnique as mentioned in another answer (have a look at this article about cel shading, which has an edge-detection pass), but full-frame post-processes are quite expensive.
On OS X you can also leverage geometry shaders (see this article), but they're not available on iOS and might be harder to implement and get right.
I would go with a much easier technique, which only involves vertex and fragment shaders. You can take a look at this article, which gives an example that's easy to re-create in SceneKit using SCNProgram or shader modifiers.
There is an example of making a glowing outline for nodes that uses SCNTechnique here:
https://github.com/laanlabs/SCNTechniqueGlow
You could modify the color and blur method to achieve a stroked outline effect.
Another SCNTechnique example, as referenced here: https://www.nurfacegames.com/everything-you-wanted-to-know-about-outline-shaders/, is to render your node slightly larger behind, then again in front at regular size.
Here's a playground example of that: https://github.com/mackhowell/scenekit-outline-shader-scntechnique.
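The trick itself is framework-agnostic; purely to illustrate the geometry of it, here is the same render-it-bigger-behind idea sketched in three.js (assuming mesh is your original THREE.Mesh): a scaled copy drawn with back faces only, so it peeks out around the silhouette.

    // Hypothetical inverted-hull outline: a slightly enlarged copy of the
    // mesh, drawn black with back faces only, sits "behind" the original.
    const outlineMaterial = new THREE.MeshBasicMaterial({
      color: 0x000000,
      side: THREE.BackSide
    });
    const outlineMesh = new THREE.Mesh(mesh.geometry, outlineMaterial);
    outlineMesh.scale.multiplyScalar(1.05); // ~5% larger than the original
    mesh.add(outlineMesh);                  // follows the original's transform

In SceneKit the equivalent is a second render pass in the SCNTechnique, or a shader modifier that pushes vertices outward along their normals.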

creating a translucent plastic material

I would like to create a plastic material using three.js, something like the lighter fuel container here:
http://wiki.blender.org/index.php/Doc:2.4/Tutorials/Render/Import/SolidWorks
I would be glad if I could get a reasonably simple example to start working from.
I am actually not rendering an image but visualizing a mathematical problem (cellular automata). I need a set of interlocking surfaces (something like sheets of plastic foil) with as much visual information as possible, so I can distinguish between them. Therefore I was looking for: translucency, reflections, rotating an object with a fixed light source, visible edges. Later I will add some animated color coding, but for now I need a good material.
Here is the current status of my code:
https://github.com/jeras/three.js/tree/master/pyca
Here is how these networks look for a 1D CA; I would like to handle a 2D problem:
http://rattus.info/al/files/conference.pdf
Thanks,
Iztok Jeras
Well, if you are looking for some examples to start working from, you should go to this three.js tutorials site: http://stemkoski.github.io/Three.js.
There are a lot of examples, and the ones you might be interested in are:
the translucence
the reflection
the refraction
some bubble effect
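As a starting point before digging into those demos, here is a minimal sketch of a translucent, reflective sheet material; envMap (a cube texture for reflections) and sheetGeometry are assumed to already exist in your scene:

    // Translucent "plastic foil" sketch with MeshPhongMaterial.
    const material = new THREE.MeshPhongMaterial({
      color: 0x3366cc,
      envMap: envMap,         // environment reflections (assumed CubeTexture)
      reflectivity: 0.3,
      transparent: true,
      opacity: 0.5,           // translucency
      side: THREE.DoubleSide, // thin sheets are visible from both sides
      shininess: 80           // tight highlight, plastic-like
    });
    scene.add(new THREE.Mesh(sheetGeometry, material));

For the visible edges you mentioned, a THREE.LineSegments built from a THREE.EdgesGeometry of the same geometry can be drawn on top.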
Hope this helps.

Making 3D representation of an object with a webcam

Is it possible to make a 3D representation of an object by capturing many different angles using a webcam? If it is, how is it possible and how is the image-processing done?
My plan is to make a 3D representation of a person using a webcam; from the 3D representation, I will be able to tell the person's vital statistics.
As Bart said (but did not post as an actual answer) this is entirely possible.
The research topic you are interested in is often called multi view stereo or something similar.
The basic idea revolves around using point correspondences between two (or more) images and then trying to find the best matching camera positions. When the positions are found, you can use stereo algorithms to back-project the image points into a 3D coordinate system and form a point cloud.
You can then process that point cloud further to get the measurements you are looking for.
If you are completely new to the subject you have some fascinating reading to look forward to!
Bart proposed Multiple View Geometry by Hartley and Zisserman, which is a very nice book indeed.
As Bart and Kigurai pointed out, this process has been studied under the title of "stereo" or "multi-view stereo" techniques. To be able to get a 3D model from a set of pictures, you need to do the following:
a) You need to know the "internal" parameters of the camera. This includes the focal length of the camera, the principal point of the image, and the radial distortion of the image.
b) You also need to know the position and orientation of each camera with respect to each other or a "world" co-ordinate system. This is called the "pose" of the camera.
There are algorithms to perform (a) and (b) which are described in Hartley and Zisserman's "Multiple View Geometry" book. Alternatively, you can use Noah Snavely's "Bundler" http://phototour.cs.washington.edu/bundler/ software to also do the same thing in a very robust manner.
Once you have the camera parameters, you essentially know how a 3D point (X,Y,Z) in the world maps to an image co-ordinate (u,v) on the photo, and how to map an image co-ordinate back to the world. You can create a dense point cloud by searching, for each pixel of one photo, for its match in a photo taken from a different view-point. This requires a two-dimensional search, but you can simplify the procedure by making the search one-dimensional. This is called "rectification": you essentially take two photos and transform them so that their rows correspond to the same line in the world (a simplified statement). Now you only have to search along image rows.
An algorithm for this can also be found in Hartley and Zisserman.
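To make the mapping concrete, this is the pinhole projection that the calibration in (a) and (b) gives you, sketched in plain JavaScript; fx and fy are focal lengths in pixels, (cx, cy) is the principal point, and (X, Y, Z) is the point already transformed into the camera's co-ordinate frame by the pose (radial distortion is ignored here):

    // Pinhole projection: camera-space point (X, Y, Z) -> pixel (u, v).
    function project(X, Y, Z, fx, fy, cx, cy) {
      return {
        u: fx * (X / Z) + cx,
        v: fy * (Y / Z) + cy
      };
    }

    // A point 2 m in front of an 800 px focal-length camera, 640x480 image:
    console.log(project(0.1, 0.0, 2.0, 800, 800, 320, 240)); // { u: 360, v: 240 }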
Finally, you need to do the matching based on some measure. There is a lot of literature out there on "stereo matching". Another word used is "disparity estimation". This is basically searching for the match of pixel (u,v) on one photo to its match (u, v') on the other photo. Once you have the match, the difference between them can be used to map back to a 3D point.
You can use Yasutaka Furukawa's "CMVS" or "PMVS2" software to do this. Or, if you want to experiment yourself, OpenCV is an open-source computer vision toolbox that covers many of the sub-tasks required for this.
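Once the matches are found on rectified images, mapping back to depth is a one-liner; a sketch, assuming f is the focal length in pixels and B the baseline (distance between the two camera centres) in metres:

    // Depth from disparity on rectified stereo pairs: Z = f * B / d.
    function depthFromDisparity(uLeft, uRight, f, B) {
      const d = uLeft - uRight;     // the disparity
      if (d <= 0) return Infinity;  // zero disparity -> point at infinity
      return (f * B) / d;
    }

    // 800 px focal length, 10 cm baseline, 20 px disparity:
    console.log(depthFromDisparity(340, 320, 800, 0.1)); // -> 4 (metres)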
This can be done with two webcams in the same way your eyes work. It is called stereoscopic vision.
Have a look at this:
http://opencv.willowgarage.com/documentation/camera_calibration_and_3d_reconstruction.html
An affordable alternative to get 3D data would be the Kinect camera system.
Maybe not the answer you are hoping for, but Microsoft's Kinect does exactly that, and there are some open-source drivers out there that allow you to connect it to your Windows/Linux box.

Tessellation in 3D

I have a set of Points in 3D space.
The image below is an example:
I would like to turn these points into a surface. I just know the X,Y and Z values of the points.
For example, check out the image below, which shows a mesh of a human face generated from points in 3D space.
I have googled a lot, but all I found were images and explanations, with no practical examples.
Is there a good algorithm that would help me solve this problem?
Thanks.
You want to do a Delaunay triangulation. See an example application here: http://www.geometrylab.de/VoroGlide/.
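If your points can be projected onto a plane without folding over (a single-view face scan usually can), the triangulation is a few lines with the Delaunator JavaScript library, shown here feeding a three.js mesh; both libraries are assumptions about your toolchain, and a closed object would instead need a surface-reconstruction method such as ball pivoting or Poisson:

    // Sketch: Delaunay-triangulate the (x, y) projection of the points,
    // then lift the triangles back to 3D. Assumes a height-field-like
    // surface (no overhangs). points = array of [x, y, z] triples.
    import Delaunator from 'delaunator';

    const delaunay = Delaunator.from(points.map(p => [p[0], p[1]]));
    const geometry = new THREE.BufferGeometry();
    geometry.setAttribute('position',
      new THREE.Float32BufferAttribute(points.flat(), 3));
    geometry.setIndex(Array.from(delaunay.triangles)); // vertex-index triples
    geometry.computeVertexNormals();
    const surface = new THREE.Mesh(geometry,
      new THREE.MeshLambertMaterial({ side: THREE.DoubleSide }));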

Liquify filter/iWarp

I'm trying to build something like the Liquify filter in Photoshop. I've been reading through image-distortion code, but I'm struggling to find out what will create similar effects. The closest reference I could find was the IWarp filter in GIMP, but the code for that isn't commented at all.
I've also looked at places like ImageMagick, but they don't have anything in this area.
Any pointers or a description of algorithms would be greatly appreciated.
Excuse me if I make this sound a little simplistic; I'm not sure how much you know about gfx programming or even what techniques you're using (I'd do it with HLSL myself).
The way I would approach this problem is to generate a texture which contains offsets of x/y coordinates in the r/g channels. Then the output colour of a pixel would be:
    sampler2D inputImage;
    sampler2D distortionMap;
    float4 colour(float2 uv : TEXCOORD0) : COLOR {
        float2 offset = tex2D(distortionMap, uv).rg;
        return tex2D(inputImage, uv + offset);
    }
(To tell the truth this isn't quite right: using the colours as offsets directly means you can only represent positive vectors; it's simple enough to subtract 0.5 from each channel so that you can represent negative vectors.)
Now the only problem that remains is how to generate this distortion map, which is a different question altogether (any image would generate a distortion of some kind, obviously; a proper liquify effect is quite complex, and I'll leave it to someone more qualified).
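For what it's worth, the lookup itself is easy to prototype on the CPU before writing any shaders; a sketch, where offsetX and offsetY are hypothetical callbacks standing in for the R/G channels of the distortion map, and src/dst are same-sized ImageData objects:

    // CPU version of the offset-map lookup: each output pixel is fetched
    // from a displaced position in the source image.
    function distort(src, dst, offsetX, offsetY) {
      const { width, height } = src;
      for (let y = 0; y < height; y++) {
        for (let x = 0; x < width; x++) {
          // displaced source coordinates, clamped to the image bounds
          const sx = Math.min(width - 1, Math.max(0, Math.round(x + offsetX(x, y))));
          const sy = Math.min(height - 1, Math.max(0, Math.round(y + offsetY(x, y))));
          const s = (sy * width + sx) * 4, d = (y * width + x) * 4;
          for (let c = 0; c < 4; c++) dst.data[d + c] = src.data[s + c];
        }
      }
    }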
I think Liquify works by altering a grid.
Imagine each pixel is defined by its location on the grid.
Now, when the user clicks on a location and moves the mouse, he's changing the grid location.
The new grid is then projected back into the user's 2D viewable space.
Check this tutorial about a way to implement the Liquify filter with JavaScript. Basically, in the tutorial, the effect is achieved by transforming the pixel Cartesian coordinates (x, y) to polar coordinates (r, α) and then applying Math.sqrt to r.
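A sketch of that polar step, assuming a brush centre (cx, cy) and radius R; it returns the source coordinate to sample for each output pixel, so it slots straight into an offset-map loop like the one in the answer above:

    // Polar warp: convert to polar around (cx, cy), apply Math.sqrt to
    // the normalised radius, convert back. Pixels outside R are untouched.
    function warpSample(x, y, cx, cy, R) {
      const dx = x - cx, dy = y - cy;
      const r = Math.sqrt(dx * dx + dy * dy);
      if (r >= R) return { x: x, y: y };
      const a = Math.atan2(dy, dx);        // polar angle
      const rNew = Math.sqrt(r / R) * R;   // the Math.sqrt(r) step
      return { x: cx + rNew * Math.cos(a), y: cy + rNew * Math.sin(a) };
    }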
