Plane division, binding data to each segment, creating bumps - three.js

Is it possible to divide a plane into several segments such that each segment represents a data value with a scalar value? Based on those values, I would like to create bumps.
Thanks!

Related

Way to map an N-dimensional vector to a point

I'm facing a problem with mapping: I need to map N-dimensional vectors to a single group/point, like [0,1,...,N-1] to 1 | [1,2,...,N-1] to 2.
The problem is that right now I have a function that receives an N-dimensional vector and returns a point; that point is the result. I want to avoid calling the function, since I already have all the results stored in a table. The problem is that I'm going to remove the function, and then I need to map each new entry to an existing point.
Is there some way to map the entry to the correct point?
Is there an algorithm for mapping to the correct point?
Any help or advice?
I already saw this topic, but I'm not sure whether the Hilbert curve is the solution; I need to study it more.
Mapping N-dimensional value to a point on Hilbert curve
I'll be grateful.
Mapping n-dimensional data to one-dimensional data is called projection. There are lots of ways to project n-dimensional data to a lower dimension; the most well-known ones are PCA, SVD, or using radial basis functions.
If you do not have your method of projection anymore, you probably can't project another point unless you have a hash table of the previously projected points. If you happen to have exactly the same point, then you can map it to the same result. However, note that the projection is not one-to-one, meaning that two different points may be mapped to the same point in the lower dimension. An example of such a case is the projection of 3D points onto the screen, where many points may get mapped to exactly the same screen location. As a result, inverse-projecting the points usually has ambiguities.
About the link that you sent on the Hilbert curve: that is a general approach for projecting a point in N-D onto a point on a space-filling curve (SFC) such as Hilbert, Peano, etc. This page at MIT has interesting material about dimension reduction using SFCs:
http://people.csail.mit.edu/jaffer/Geometry/MDSFC
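A rough illustration of the exact-match lookup plus a fallback projection (a minimal Python/NumPy sketch; the example vectors and the SVD/PCA-style projection are assumptions, not the asker's original function):

import numpy as np

# Previously computed results: each N-dimensional vector and the point it was mapped to.
# Keying a dict on the exact vector lets you look up an entry without the original function.
known = {
    (0.0, 1.0, 2.0): 1,
    (1.0, 2.0, 3.0): 2,
}

def lookup(vector):
    """Return the stored point for an exact match, or None if the vector was never projected."""
    return known.get(tuple(vector))

# If you still need to project genuinely new vectors, you have to pick some projection yourself,
# e.g. a PCA-style projection onto the first principal component of the known vectors.
data = np.array([list(v) for v in known])           # rows = known N-dimensional vectors
mean = data.mean(axis=0)
_, _, vt = np.linalg.svd(data - mean, full_matrices=False)
axis = vt[0]                                        # direction of maximum variance

def project(vector):
    """Project an N-dimensional vector to a single scalar along the principal axis."""
    return float((np.asarray(vector) - mean) @ axis)

As noted above, this is not one-to-one: distinct inputs can land on the same scalar, so the lookup table is the only way to recover exactly which original vector produced a given result.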

Difference between vertex and point in vtk

What's the main difference between a vertex and a point in VTK?
Well, I was assigning some computed points to a vtkPolyData output:
vtkPolyData* oput = vtkPolyData::SafeDownCast(out_info->Get(vtkDataObject::DATA_OBJECT()));
and I wondered whether to use the method SetVerts(vtkCellArray *v) or the method SetPoints(vtkPoints *).
In VTK datasets (i.e., classes inheriting vtkDataSet which is the simplest type of data that provides a notion of points), points are simply locations in space. Data may be stored at locations in space or on cells (e.g., triangles or tetrahedra) that represent a locus of points. Values stored on cells take on the same value at every point in the cell's locus.
Cells are defined by their corner points. In vtkPolyData, every cell is defined by a list of integer offsets into the point coordinates in a vtkPoints instance.
A vertex in VTK is a cell whose point locus is a single point.
It is possible to have points listed explicitly in a VTK dataset which are not referenced by any cell (e.g., you can specify point coordinates in a vtkPoints object that are not used as corner points for any tetrahedron, triangle, or vertex cell). These points can only have point data (stored by arrays in a vtkPointData instance held by the vtkDataSet) and not cell data (stored by arrays in a vtkCellData instance held by the vtkDataSet).
So, SetPoints() lets you provide point coordinates which vtkCellArray instances then reference to define point loci of various shapes. One category of shapes is vertices (hence SetVerts()), while others include lines and polylines (SetLines()) and triangles/quads (SetPolys()).
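A minimal sketch of that split between point coordinates and vertex cells, using VTK's Python bindings for brevity (the coordinates here are made up; the same SetPoints()/SetVerts() calls exist in C++):

import vtk

# Three explicit point locations.
points = vtk.vtkPoints()
points.InsertNextPoint(0.0, 0.0, 0.0)
points.InsertNextPoint(1.0, 0.0, 0.0)
points.InsertNextPoint(0.0, 1.0, 0.0)

# Make each point the corner of a vertex cell so it can carry cell data and be rendered.
verts = vtk.vtkCellArray()
for i in range(points.GetNumberOfPoints()):
    verts.InsertNextCell(1)      # a vertex cell has exactly one corner point
    verts.InsertCellPoint(i)     # ...namely the point with this index

poly = vtk.vtkPolyData()
poly.SetPoints(points)   # point coordinates only
poly.SetVerts(verts)     # cells that reference those coordinates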
I think that depends on what the points are supposed to be. Points are just points that can be visualized e.g. as part of a point cloud, while verts are parts of triangles that can represent a surface or volume.
Without any specifics about your intent I think we can't really tell you which to use.
Maybe the first part of this example is similar to what you need: http://www.vtk.org/Wiki/VTK/Examples/Cxx/Picking/AreaPicking
Typically you set the points and then you also need to assign vertices (or other kinds of cells) in order to have something to visualize (you can assign them manually as in the example, or use the vtkVertexGlyphFilter); a short sketch of the filter route follows below.
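A short sketch using VTK's Python bindings (the points are made up); vtkVertexGlyphFilter simply emits one vertex cell per input point, so you don't have to build the cell array by hand:

import vtk

points = vtk.vtkPoints()
points.InsertNextPoint(0.0, 0.0, 0.0)
points.InsertNextPoint(1.0, 1.0, 1.0)

poly = vtk.vtkPolyData()
poly.SetPoints(points)             # only point coordinates, nothing to render yet

glyph = vtk.vtkVertexGlyphFilter() # adds a vertex cell for every input point
glyph.SetInputData(poly)
glyph.Update()
renderable = glyph.GetOutput()     # vtkPolyData with both points and vertex cells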

Efficiently project 2D plane onto 1D line

I have an array of [width, height, x, y] vectors, like so: [[width_1, height_1, x_1, y_1],...,[width_n, height_n, x_n, y_n]], representing a 2D plane of blocks. This array is potentially long (n > 10k).
An example: [image omitted] must be projected like: [image omitted].
The problem, however, is that the blocks are not neatly stacked, but can be of any shape and in any position.
The criterion for which block should be projected doesn't really matter. In the example I took the first (on the x-axis) largest, which seems reasonable.
What is important is that a list (vector) is maintained of which other blocks were occluded by the projected block. The blocks bear metadata which is important, so I should be able to answer the question "to what line segment was this block projected?"
So concretely how can a 2D plane be efficiently projected onto a line, in a sense "cast a shadow", in a way that maintains a method of seeing what blocks partake in a line segment (shadow)?
Edit: while the problem is rather generic, the concrete problem is that I have a document with multiple columns and floating images, for which I would like to generate a "minimap" that indicates where to find certain annotations (colors).
Assuming that the rectangles are always aligned with the axes, as in your example, I would use a sweep line approach:
Sort the rectangle tops/bottoms according to their y value. For every element, keep a reference to the full rectangle data.
Scan the list in increasing y order, maintaining a set S of the rectangles that contain the current y value. For every top of a rectangle r, add r to S. Similarly, for every bottom of r, remove r from S. Every time you do this, a segment is closed and a new one is started. If you inspect S at this point, you have all the rectangles that participate in the segment, so this is the place to apply a policy for choosing the segment color.
If you need to know later what segments a rectangle belongs to, you can build a mapping between rectangles and segment lists, and update it during the scan.
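A compact sketch of that sweep in plain Python (the [width, height, x, y] block format follows the question, with y taken as the top edge; the "widest active block" rule is just one example of a colour policy):

def project_blocks(blocks):
    """blocks: iterable of (width, height, x, y) tuples, y being the top edge.
    Returns a list of (y_start, y_end, chosen_block, active_blocks) segments."""
    events = []
    for b in blocks:
        w, h, x, y = b
        events.append((y, 'open', b))       # top edge enters the sweep
        events.append((y + h, 'close', b))  # bottom edge leaves the sweep
    events.sort(key=lambda e: e[0])

    segments = []
    active = set()
    prev_y = None
    for y, kind, b in events:
        # The active set is about to change, so close the running segment.
        if prev_y is not None and y > prev_y and active:
            # Example policy: take the widest active block.
            chosen = max(active, key=lambda r: r[0])
            segments.append((prev_y, y, chosen, list(active)))
        if kind == 'open':
            active.add(b)
        else:
            active.discard(b)
        prev_y = y
    return segments

# Example: two overlapping blocks produce three segments along the line.
print(project_blocks([(2, 3, 0, 0), (1, 2, 1, 1)]))

Storing the full active set per segment (as above) is what lets you answer "to what line segment was this block projected?" afterwards, by inverting that mapping.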

OpenGL ES/real time/position of any vertex that is displayed?

I'm currently dealing with OpenGL ES (2, iOS 6)… and I have a question
i. Let there be a mesh that has to be drawn. Moreover,
ii. I can ask for a rotation/translation so that the point of view changes.
So,
how can I know (in real time) the position of any vertex that is displayed?
Thank you in advance.
jgapc
It's not entirely clear what you are after, but if you want to know where your object is after doing a bunch of rotations and translations, then one very easy option, provided you perform these changes in your program code rather than in the shader, is to simply take the entire last row or column of your transformation matrix (depending on whether you are using row-major or column-major matrices), which will be the final translation of your object's center as a coordinate vector.
This last row or column is the same thing as multiplying your final transformation matrix by your object's local coordinate center vector, which is (0,0,0,1).
If you want to know where an object's vertex is, rather than the object's center, then multiply that vertex in local coordinate space by the final transformation matrix, and you will get the new coordinate where that vertex is positioned.
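A small NumPy illustration of that, assuming the column-major/column-vector convention where the translation sits in the last column (the numbers are arbitrary):

import numpy as np

# Column-vector convention: world_position = M @ local_position.
M = np.array([
    [1.0, 0.0, 0.0, 2.0],
    [0.0, 1.0, 0.0, 3.0],
    [0.0, 0.0, 1.0, 4.0],
    [0.0, 0.0, 0.0, 1.0],
])

# The object's local centre (0, 0, 0, 1) maps to the last column of M...
centre_world = M @ np.array([0.0, 0.0, 0.0, 1.0])
print(centre_world[:3])          # [2. 3. 4.], same as M[:3, 3]

# ...and any other local vertex is transformed the same way.
vertex_local = np.array([1.0, 0.0, 0.0, 1.0])
vertex_world = M @ vertex_local
print(vertex_world[:3])          # [3. 3. 4.]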
There are two things I'd like to point out:
Back-face culling discards triangles, not vertices.
Triangles are also clipped so that they're within the viewing frustum.
I'm curious as to why you care about what is not displayed?
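For context, a rough sketch of how such a facing test works per triangle (plain Python, assuming 2D screen-space coordinates with y pointing up and counter-clockwise front faces):

def is_back_facing(a, b, c):
    """a, b, c: (x, y) projected positions of the triangle's corners, in submission order.
    With counter-clockwise front faces, a negative signed area means the triangle
    winds clockwise on screen, i.e. it faces away and the whole triangle is culled."""
    signed_area = (b[0] - a[0]) * (c[1] - a[1]) - (c[0] - a[0]) * (b[1] - a[1])
    return signed_area < 0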

How many normals?

If you are calculating the normals of a polygon for rendering it in WebGL, do you use a normal for every index in the index array or for every vertex in the vertex array?
In the notes here, the user is calculating them for each vertex.
Every vertex. A vertex, in the WebGL sense (which is the same as OpenGL ES and other predecessors), isn't really a point in space, but rather a combination of attributes. One of these is almost always the location (though in unusual cases you might not have that), and others are generally things like the normal vector, the colour, the texture coordinates, and so on.
The index array, by contrast, is an offset into the vertex attribute arrays. So when you specify index (say) 0 in an index array, it's shorthand for "the vertex made of combining the first location in the location buffer, the first normal in the normal buffer, the first colour in the colour buffer, and the first texture coordinate in the texture coordinate buffer".
The most counter-intuitive thing for me when learning this was separating vertices from the locations they happen to occupy. There's no reason why two vertices can't have the same location.
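A tiny sketch of that layout, with plain Python lists standing in for the attribute and index buffers (the positions and normals are made up):

# Parallel attribute arrays: entry i of each array together forms vertex i.
positions = [
    (0.0, 0.0, 0.0),   # vertex 0
    (1.0, 0.0, 0.0),   # vertex 1
    (0.0, 0.0, 0.0),   # vertex 2: same location as vertex 0, but a different normal
]
normals = [
    (0.0, 0.0, 1.0),   # vertex 0: belongs to the front face
    (0.0, 0.0, 1.0),   # vertex 1
    (0.0, -1.0, 0.0),  # vertex 2: belongs to the bottom face, so it needs its own vertex
]

# The index buffer picks whole vertices, i.e. one entry from every attribute array at once.
indices = [0, 1, 2]

for i in indices:
    print("vertex", i, "at", positions[i], "with normal", normals[i])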
