Difference between vertex and point in vtk - computational-geometry

What's the main difference between a vertex and a point in VTK?
Well, I was assigning some computed points to a vtkPolyData output:
vtkPolyData* oput = vtkPolyData::SafeDownCast(out_info->Get(vtkDataObject::DATA_OBJECT()));
and I wondered whether to use the method SetVerts(vtkCellArray *v) or the method SetPoints(vtkPoints *).

In VTK datasets (i.e., classes inheriting vtkDataSet, which is the simplest type of data that provides a notion of points), points are simply locations in space. Data may be stored at locations in space or on cells (e.g., triangles or tetrahedra) that represent a locus of points. Values stored on cells take on the same value at every point in the cell's locus.
Cells are defined by their corner points. In vtkPolyData, every cell is defined by a list of integer offsets into the point coordinates in a vtkPoints instance.
A vertex in VTK is a cell whose point locus is a single point.
It is possible to have points listed explicitly in a VTK dataset which are not referenced by any cell (e.g., you can specify point coordinates in a vtkPoints object that are not used as corner points for any tetrahedron, triangle, or vertex cell). These points can only have point data (stored by arrays in a vtkPointData instance held by the vtkDataSet) and not cell data (stored by arrays in a vtkCellData instance held by the vtkDataSet).
So, SetPoints() lets you provide point coordinates which vtkCellArray instances then reference to define point loci of various shapes. One category of shapes is vertices (hence SetVerts()), while others include lines and polylines (SetLines()) and triangles/quads (SetPolys()).
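A minimal sketch of how the two calls fit together (the function name and the three example points are just for illustration):

    #include <vtkCellArray.h>
    #include <vtkPoints.h>
    #include <vtkPolyData.h>
    #include <vtkSmartPointer.h>

    // Fill a vtkPolyData with three explicit points, each referenced by a
    // one-point vertex cell so that the points are actually renderable.
    void fillWithVertices(vtkPolyData* oput)
    {
      vtkSmartPointer<vtkPoints> points = vtkSmartPointer<vtkPoints>::New();
      vtkSmartPointer<vtkCellArray> verts = vtkSmartPointer<vtkCellArray>::New();
      for (vtkIdType i = 0; i < 3; ++i)
      {
        vtkIdType id = points->InsertNextPoint(static_cast<double>(i), 0.0, 0.0);
        verts->InsertNextCell(1);     // a vertex cell has exactly one corner point
        verts->InsertCellPoint(id);   // ...given as an index into `points`
      }
      oput->SetPoints(points);        // geometry: the coordinates
      oput->SetVerts(verts);          // topology: vertex cells referencing them
    }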

I think that depends on what the points are supposed to be. Points are just locations that can be visualized, e.g. as part of a point cloud, while verts and other cells (such as triangles) define the topology that can represent a surface or volume.
Without any specifics about your intent I think we can't really tell you which to use.

Maybe the first part of this example is similar to what you need: http://www.vtk.org/Wiki/VTK/Examples/Cxx/Picking/AreaPicking
Typically you set the points, and then you also need to assign vertex cells (or other kinds of cells) in order to have something to visualize. You can assign them manually, as in the example, or use vtkVertexGlyphFilter.
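For instance, a small sketch, assuming pd is a vtkPolyData that already holds the points:

    #include <vtkPolyData.h>
    #include <vtkSmartPointer.h>
    #include <vtkVertexGlyphFilter.h>

    // Let vtkVertexGlyphFilter create one vertex cell per point instead of
    // building the cells by hand.
    vtkSmartPointer<vtkPolyData> addVertexCells(vtkPolyData* pd)
    {
      vtkSmartPointer<vtkVertexGlyphFilter> glyph =
          vtkSmartPointer<vtkVertexGlyphFilter>::New();
      glyph->SetInputData(pd);   // SetInput(pd) on older, pre-VTK-6 versions
      glyph->Update();
      return glyph->GetOutput(); // same points, plus a vertex cell for each
    }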

Related

Determine whether a point is inside a cube using only distance queries

Given a 3D distance field that contains the distance to points in a grid (e.g. an ESDF or TSDF), I want to efficiently check whether a cube at an arbitrary orientation contains a point.
A straightforward approach is to perform raytracing to identify which cells are contained inside the cube and check whether any of those cells have 0 distance. This solution is unsatisfying because it throws away the distance information and is tightly coupled to the underlying ESDF via raytracing. It seems like we should be able to solve this problem more generally using only the distance information and the resolution parameter.
One could imagine a more sophisticated approach which first checks the distance from the center of the cube to the nearest point: if the value exceeds the radius of the cube's circumscribed sphere, we know the cube is empty, and if it is at most the radius of the inscribed sphere, we know there is a point inside the cube. If the value is somewhere in between, we could recursively check the ambiguous regions. Because the distance function is discretized, this algorithm should eventually terminate if the bookkeeping is done properly during the search.
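In C++-flavored pseudocode the idea might look roughly like this (the Vec3 helpers, the distanceAt callback, and the below-resolution cut-off are assumptions for illustration, not an established method):

    #include <array>
    #include <cmath>
    #include <functional>

    struct Vec3 { double x, y, z; };
    Vec3 add(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
    Vec3 scale(Vec3 a, double s) { return {a.x * s, a.y * s, a.z * s}; }

    // Recursive sphere-bound test against the distance field. `axes` are the
    // cube's orthonormal local axes, so the cube may be arbitrarily oriented;
    // `distanceAt` samples the (E)SDF at a point.
    bool cubeContainsPoint(Vec3 center, double halfExtent,
                           const std::array<Vec3, 3>& axes,
                           const std::function<double(Vec3)>& distanceAt,
                           double resolution)
    {
      double d = distanceAt(center);
      if (d <= halfExtent) return true;                  // within the inscribed sphere
      if (d > halfExtent * std::sqrt(3.0)) return false; // beyond the circumscribed sphere
      if (halfExtent < resolution) return false;         // still ambiguous below grid resolution
      // Ambiguous: recurse into the eight octants of this cube.
      for (int i = -1; i <= 1; i += 2)
        for (int j = -1; j <= 1; j += 2)
          for (int k = -1; k <= 1; k += 2)
          {
            Vec3 offset = add(add(scale(axes[0], i * halfExtent / 2),
                                  scale(axes[1], j * halfExtent / 2)),
                              scale(axes[2], k * halfExtent / 2));
            if (cubeContainsPoint(add(center, offset), halfExtent / 2,
                                  axes, distanceAt, resolution))
              return true;
          }
      return false;
    }

The cut-off once halfExtent drops below the grid resolution is exactly the bookkeeping detail that needs care: returning false there is conservative, and falling back to a direct grid lookup might be more appropriate.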
Of course the devil is in the details and that is why I'm asking this question. What is the most efficient method to identify if there is a point inside the cube? If this is a classical problem, what is it called?

Efficiently project 2D plane onto 1D line

I have an array of [width, height, x, y] vectors, like so: [[width_1, height_1, x_1, y_1], ..., [width_n, height_n, x_n, y_n]], representing a 2D plane of blocks. This array is potentially long (n > 10k).
An example (images omitted): a set of blocks and the line segments they should be projected onto.
The problem, however, is that the blocks are not neatly stacked but can be in any shape and position.
The criterion for which block should be projected doesn't really matter. In the example I took the first (on the x-axis) largest block, which seems reasonable.
What is important is that a list (vector) is maintained of which other blocks were occluded by the projected block. The blocks bear metadata which is important, so I should be able to answer the question "to what line segment was this block projected?"
So, concretely: how can a 2D plane be efficiently projected onto a line, in a sense "casting a shadow", in a way that keeps track of which blocks participate in each line segment (shadow)?
Edit: while the problem is rather generic, the concrete problem is that I have a document with multiple columns and floating images, for which I would like to generate a "minimap" that indicates where to find certain annotations (colors).
Assuming that the rectangles are always aligned with the axes, as in your example, I would use a sweep line approach:
Sort the rectangle tops/bottoms according to their y value. For every element, keep a reference to the full rectangle data.
Scan the list in increasing y order, maintaining a set S of rectangles representing the rectangles that contain the current y value. For every top of a rectangle r, add r to S. Similarly, for every bottom of r, remove r from S. Every time you do it, a segment is being closed and a new one is started. If you inspect S at this point, you have all the rectangles that participate in the segment, so this is the place to apply a policy for choosing the segment color.
If you need to know later what segments a rectangle belongs to, you can build a mapping between rectangles and segments lists, and update it during the scan.
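A compact sketch of that scan (struct names, the increasing-y convention, and the per-segment list of rectangle indices are illustrative assumptions):

    #include <algorithm>
    #include <set>
    #include <vector>

    struct Block   { double width, height, x, y; };      // one [width, height, x, y] entry
    struct Event   { double y; bool opens; int block; }; // bottom edge opens, top edge closes
    struct Segment { double yStart, yEnd; std::vector<int> blocks; };

    // Sweep in increasing y, maintaining the set S of blocks that contain the
    // current y; every event boundary closes one segment and starts another.
    std::vector<Segment> projectOntoY(const std::vector<Block>& blocks)
    {
      std::vector<Event> events;
      for (int i = 0; i < static_cast<int>(blocks.size()); ++i)
      {
        events.push_back({blocks[i].y, true, i});
        events.push_back({blocks[i].y + blocks[i].height, false, i});
      }
      std::sort(events.begin(), events.end(),
                [](const Event& a, const Event& b) { return a.y < b.y; });

      std::set<int> active;            // the set S: blocks spanning the current y
      std::vector<Segment> segments;
      double prevY = 0.0;

      for (const Event& e : events)
      {
        if (!active.empty() && e.y > prevY)
          segments.push_back({prevY, e.y, {active.begin(), active.end()}});
        if (e.opens) active.insert(e.block);
        else         active.erase(e.block);
        prevY = e.y;
      }
      return segments;
    }

Each Segment keeps the indices of every participating block, so the block-to-segments mapping mentioned above can be built either by inverting the result or by appending to a per-block list inside the same loop.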

OpenGL ES/real time/position of any vertex that is displayed?

I'm currently dealing with OpenGL ES 2 (iOS 6), and I have a question.
i. Suppose there is a mesh that has to be drawn. Moreover,
ii. I can ask for a rotation/translation so that the point of view changes.
So,
how can I know (in real time) the position of any vertex that is displayed?
Thank you in advance.
jgapc
It's not entirely clear what you are after, but suppose you want to know where your object is after applying a series of rotations and translations. If you perform these transformations in your program code rather than in the shader, one very easy option is to take the entire last row or column of your transformation matrix (depending on whether you use row-major or column-major matrices); it holds the final translation of your object's center as a coordinate vector.
This last row or column is the same thing as multiplying your final transformation matrix by your object's local coordinate center vector, which is (0,0,0,1).
If you want to know where an object's vertex is, rather than the object's center, then multiply that vertex in local coordinate space by the final transformation matrix, and you will get the new coordinate where that vertex is positioned.
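As a concrete illustration in plain C++ rather than shader code (the column-major layout matches OpenGL's convention; the type and function names are made up):

    #include <array>

    using Mat4 = std::array<float, 16>;  // column-major: element(row, col) = m[col * 4 + row]
    struct Vec4 { float x, y, z, w; };

    // Apply a model-view matrix to a vertex given in the object's local space.
    Vec4 transform(const Mat4& m, const Vec4& v)
    {
      return { m[0] * v.x + m[4] * v.y + m[8]  * v.z + m[12] * v.w,
               m[1] * v.x + m[5] * v.y + m[9]  * v.z + m[13] * v.w,
               m[2] * v.x + m[6] * v.y + m[10] * v.z + m[14] * v.w,
               m[3] * v.x + m[7] * v.y + m[11] * v.z + m[15] * v.w };
    }

    // The object's local center (0, 0, 0, 1) transformed this way is simply
    // the matrix's last column, i.e. its translation part, as described above.
    Vec4 objectCenter(const Mat4& modelView)
    {
      return transform(modelView, {0.f, 0.f, 0.f, 1.f});
    }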
There are two things I'd like to point out:
Back-face culling discards triangles, not vertices.
Triangles are also clipped so that they're within the viewing frustum.
I'm curious: why do you care about what is not displayed?

Region Sets From Boundaries

I have an elevation map represented by a 2D array of floats.
There are regions of this map whose edges I have collected into a single vector containing a list of the edge cells (identified by their x and y coordinates).
The edge cells are not aware of which region they are associated with, nor are edge cells which are contiguous within the vector necessarily adjacent to each other in the map.
I would like to be able to uniquely identify each region based on this information (the list of edge cells for the whole map, which again, may not be adjacent).
I have thought about trying to start at one edge cell and traverse the edge, but then the enclosed space may contain regions which should be excluded (a lake around an island which itself contains a lake). I've considered using some kind of bucket fill, but this would disrupt the valuable elevation data and I don't want to create a second array to store the information.
Any thoughts on an efficient way to go about it?
Richard,
This is a classical connected-components labeling problem, isn't it?
There are indeed several solutions when you are allowed to store a 'state' map, i.e. an auxiliary image where pixels can be assigned discrete values. Among these methods, you can paint the edge pixels, then flood fill the enclosed regions. In this case, a single bit per pixel is enough.
If you don't want to afford extra storage for this bit, you can probably "steal" it from the floating point values. For instance if all elevations are positive, you can embezzle the sign bit for this purpose (and reset it afterwards); easily done in C by mapping a bitfield over the float.
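For illustration, here is a sketch of that trick in C++, using memcpy instead of a bitfield to sidestep aliasing pitfalls; it assumes IEEE-754 floats and non-negative elevations:

    #include <cmath>
    #include <cstdint>
    #include <cstring>

    // Temporarily use the sign bit of an elevation value as a one-bit
    // "painted/visited" flag, and clear it again when labeling is done.
    inline void markCell(float& elevation)
    {
      uint32_t bits;
      std::memcpy(&bits, &elevation, sizeof bits);
      bits |= 0x80000000u;              // set the sign bit
      std::memcpy(&elevation, &bits, sizeof bits);
    }

    inline bool isMarked(float elevation)
    {
      return std::signbit(elevation);
    }

    inline void clearMark(float& elevation)
    {
      uint32_t bits;
      std::memcpy(&bits, &elevation, sizeof bits);
      bits &= 0x7FFFFFFFu;              // reset the sign bit
      std::memcpy(&elevation, &bits, sizeof bits);
    }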

How should I specify rectangles in a 3D scene?

When rendering 3D rectangles (i.e. rectangles in 3D space), of course, they are specified as a list of vertexes for two triangles. However, that representation contains a lot of extraneous information that gets tiresome to code multiple times. I'd like to create a "Rectangle" object that will allow me to specify its texture, size, position, and orientation in space and export the list of vertexes (and indexes), but I'm not sure of the best way to do it. Should I specify the position of the lower left corner (pre-rotation), or the center of the rectangle? How should I specify the orientation, as a vector containing rotation angles? This is such a simple and standard requirement that I'm sure people have thought about it before, but I can't find anything on this site or elsewhere on the subject. I plan to use these objects a lot, so my primary goal (apart from performance) is ease of use rather than anything to do with the internal representation. It wouldn't be hard for me to simply code the first thing I can think of, but I don't want to miss anything and make it unnecessarily difficult.
So, how should I represent a Rectangle object? Opinions are welcome, but sources would be especially helpful.
Edit: if it helps, I believe I'd primarily be using the rectangles on the faces of cubes, though not necessarily as the entire faces of those cubes.
It would probably be simplest to store the homogeneous matrix that transforms a standard, axis-aligned square into the desired location, along with a separate matrix that determines how to map the texture onto it.
For the location matrix, you can store the 4x3 matrix that doesn't affect the w-coordinate. This is only a bit redundant: it uses 12 values where a general rectangle needs 8, but on the other hand, it will be much easier to convert it back to a form usable for rendering.
Alternately, you can store a point location (a corner or the center, whichever is most convenient) and two direction vectors describing the direction and length of each edge; you are relying on your rectangle generator to make sure the edge vectors are orthogonal. This will take 9 values, which is almost the best you can do.
For the texture mapping, you can store a 3x2 matrix that defines an affine mapping of the (u,v) coordinates onto the coordinates defined by the edges of the rectangle. You can choose a zero-based (0,1)x(0,1) mapping, or a symmetric (-1,1)x(-1,1) mapping, based on whatever is convenient for your application. In any case, this will require 6 values.
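As an illustration of the point-plus-two-edge-vectors variant, here is a small sketch that also exports the two triangles a renderer needs (all names are hypothetical, not an established API):

    #include <array>

    struct Vec3 { float x, y, z; };
    struct Vec2 { float u, v; };
    Vec3 add(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }

    struct Rect3D
    {
      Vec3 origin;  // one corner of the rectangle
      Vec3 edgeU;   // first edge (direction and length)
      Vec3 edgeV;   // second edge, assumed orthogonal to edgeU

      struct Vertex { Vec3 position; Vec2 texCoord; };

      // The four corners with a simple (0,1)x(0,1) texture mapping;
      // indices {0,1,2, 0,2,3} give the two triangles.
      std::array<Vertex, 4> corners() const
      {
        return {{ { origin,                         {0.f, 0.f} },
                  { add(origin, edgeU),             {1.f, 0.f} },
                  { add(add(origin, edgeU), edgeV), {1.f, 1.f} },
                  { add(origin, edgeV),             {0.f, 1.f} } }};
      }
    };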
As a rectangle is just a bounded plane, what about storing it as an extension of that: a point and a normal vector (defining the centre, or perhaps one of the corners, and the orientation), plus two more components for the width and height bounds?
I think it really depends on how you intend to use the rectangle.
For example: if you have lots of rectangles, storing three points of one of the two triangles might be best, because then you only have to calculate one more point.
If you typically center your rectangles on something, then the center point, width, height, and rotation angles might be more appropriate.
I'd say: start with whatever seems natural to you. Make sure your class can do all the necessary calculations, and hide them behind accessors. Have a good suite of tests for that.
That way you can change the implementation at any time. You can even have different rectangle implementations for different needs.
