What are bone weights, bone influences, joints, offset matrices and local matrices used for? I haven't found an article online that explains the logic well. I still don't know whether every bone has its own model that is later combined with the other models, or how to handle these matrices, set up the bones, combine them and skin the mesh... I'd be glad if you could share any articles or your knowledge about skeletal animation in OpenGL.
Some of these terms will differ between implementations. Rather than try to provide concrete definitions, I'd prefer to give a very rough overview of how skinning works, as I think that's what you're asking.
Also, this is not particularly GL specific...
The main idea is that you take a polygonal model and attach it to a skeleton. Each vertex in the model is assigned to one or more bones. A bone, in this context, is really just a transformation, though they are typically visualised as a skeleton, as the bones are usually authored in a hierarchy and will naturally resemble an actual skeleton.
A 'joint' may just mean 'bone', or in other contexts may in fact refer to how two bones are connected and articulated...
As the bones are hierarchical, they will have 'local' transforms which describe their transformation relative to a parent. At runtime the transforms will usually be concatenated such that they are all in the same space.
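To make that concrete, here is a minimal sketch of the concatenation step in C++, using glm for the matrix maths. The Bone struct and the function name are just made up for illustration, and the bones are assumed to be ordered so that parents come before their children:

```cpp
#include <vector>
#include <glm/glm.hpp>

// Hypothetical bone record: a local transform plus the index of its parent
// (-1 for the root). Bones are assumed to be ordered so parents come first.
struct Bone {
    int       parent;   // index of the parent bone, or -1 for the root
    glm::mat4 local;    // transform relative to the parent
};

// Concatenate the local transforms down the hierarchy so that every bone
// ends up with a transform expressed in the same (model/skeleton) space.
std::vector<glm::mat4> computeGlobalTransforms(const std::vector<Bone>& bones)
{
    std::vector<glm::mat4> global(bones.size());
    for (size_t i = 0; i < bones.size(); ++i) {
        if (bones[i].parent < 0)
            global[i] = bones[i].local;                        // root bone
        else
            global[i] = global[bones[i].parent] * bones[i].local;
    }
    return global;
}
```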
The assigning of vertices to bones is done using weights. Weights will usually add up to 1 for each vertex. Weights can be automatically assigned by proximity to a bone, but will typically be hand-adjusted by an artist. Often they are 'painted' onto the 3D model using a tool in the art package.
At runtime, vertices are transformed by each bone they are influenced by, and the final vertex position is the weighted average of the result of those different transforms. How that weighted average is calculated can vary, but that's the general approach.
However, for runtime applications it is usually important to keep the number of different bones influencing a vertex to a minimum, and there may well be a relatively small upper limit, perhaps 4. So instead of providing each vertex with a weight for every bone in the skeleton, it is common to provide a fixed number of joint indices with corresponding weights.
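Here is a rough CPU-side sketch of that weighted blend, again using glm and invented names. The skinMatrices array is assumed to hold, per bone, the bone's global transform multiplied by its inverse bind ("offset") matrix:

```cpp
#include <array>
#include <vector>
#include <glm/glm.hpp>

// Hypothetical vertex layout: a bind-pose position plus a fixed number (4)
// of bone indices and weights; the weights are assumed to sum to 1.
struct SkinnedVertex {
    glm::vec3            position;
    std::array<int, 4>   boneIndex;
    std::array<float, 4> weight;
};

// Linear blend skinning: transform the vertex by each influencing bone and
// blend the results by the weights. skinMatrices[b] is assumed to be
// globalTransform[b] * inverseBindMatrix[b] (the 'offset' matrix) for bone b.
glm::vec3 skinVertex(const SkinnedVertex& v,
                     const std::vector<glm::mat4>& skinMatrices)
{
    glm::vec3 result(0.0f);
    for (int i = 0; i < 4; ++i) {
        glm::vec4 p = skinMatrices[v.boneIndex[i]] * glm::vec4(v.position, 1.0f);
        result += v.weight[i] * glm::vec3(p);
    }
    return result;
}
```

The inverse bind ("offset") matrix takes a vertex from model space into the bone's local space in the bind pose, so that the bone's animated global transform can then carry it to the posed position. In a real renderer this same blend is usually done in the vertex shader, with the indices and weights supplied as per-vertex attributes.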
Note that you generally do not do anything with texture coordinates when skinning, but you will almost certainly have to recalculate normals and possibly tangent vectors. Again, how to do that can vary between implementations.
I have observed that some applications create a geometric structure apparently from just a set of touch points, like this example:
I wonder which algorithms can possibly help me to recreate such geometric structures?
UPDATE
In 3D printing, sometimes a support structure is needed:
The need for support arises because some regions of the 3D object, i.e. overhangs, would collapse while printing. The support structure is supposed to connect overhangs either to the print floor or to the 3D object itself. The geometric structure shown in the screenshot above is actually a sample support structure.
I am not a specialist in that matter and I may be missing important issues. So here is what I would naively do.
The triangles having an external normal pointing downward will reveal the overhangs. When projected vertically and merged by common edges, they define polygonal regions of the base plane. You first have to build those projected polygons, find their intersections, and order the intersections by Z. (You might also want to consider the facing polygons to take the surface thickness into account.)
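As a sketch of that first step, here is one way to pick out the overhang triangles by looking at the Z component of each face normal (using glm; the Triangle struct, the function name and the 45-degree limit are illustrative assumptions, not fixed rules):

```cpp
#include <cmath>
#include <vector>
#include <glm/glm.hpp>

// Hypothetical triangle: three vertices in printer coordinates, Z pointing up.
struct Triangle { glm::vec3 a, b, c; };

// Collect the triangles whose outward normal points downward more steeply
// than a chosen overhang angle (measured from straight down).
std::vector<Triangle> findOverhangs(const std::vector<Triangle>& tris,
                                    float maxOverhangDegrees = 45.0f)
{
    // A face is treated as an overhang when its unit normal's Z component
    // is below -cos(maxOverhangDegrees).
    const float zLimit = -std::cos(glm::radians(maxOverhangDegrees));
    std::vector<Triangle> overhangs;
    for (const Triangle& t : tris) {
        glm::vec3 n = glm::normalize(glm::cross(t.b - t.a, t.c - t.a));
        if (n.z < zLimit)
            overhangs.push_back(t);
    }
    return overhangs;
}
```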
Now for every intersection polygon, you draw verticals down to the one just below. The projections of the verticals might be sampled from a regular grid or in some other way, to tune the density. You might also consider running those pillars continuously from the print floor to the upper surface, possibly stopping some of them earlier.
The key ingredient in this procedure is a good polygon intersection algorithm.
There is a landscape in my game that consists of several layers (each a separate mesh). I want to simplify all of the layers, but after applying simplification in the game engine I found many intersections between the different layers. Is there a simplification algorithm that avoids intersections (or reduces their number), or how can I avoid the intersections after simplification? Any open source code repo or library would be appreciated.
If I have understood correctly, the problem arises because you are decimating each layer independently, so each decimation process has no information about the others.
One idea you can try is to quantize the vertex coordinates after each decimation: instead of letting the coordinates take arbitrary values, force them to be multiples of a given step using rounding or truncation. This increases the probability that vertices from two different layers end up with the same values. Depending on your layers, it may be enough to quantize only one coordinate to avoid the appearance of overlap between layers.
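A minimal sketch of that quantization step (glm for the vectors; the step value would have to be tuned to your layer geometry):

```cpp
#include <cmath>
#include <vector>
#include <glm/glm.hpp>

// Snap every coordinate to a multiple of 'step' after decimating a layer, so
// that near-coincident vertices of different layers land on identical values.
// Quantizing only one coordinate (e.g. the height) may already be enough.
void quantizeVertices(std::vector<glm::vec3>& vertices, float step)
{
    for (glm::vec3& v : vertices) {
        v.x = std::round(v.x / step) * step;
        v.y = std::round(v.y / step) * step;
        v.z = std::round(v.z / step) * step;
    }
}
```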
Another idea is to use a simplification algorithm based on "vertex clustering decimation", which produces a similar effect to quantization. In this case, you can cluster a point cloud that contains the vertices of every layer together, so that the representative vertex chosen for each cluster is shared by all layers.
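A rough sketch of that cross-layer clustering, again with invented names: vertices from all layers are binned on one common grid, each cell's average becomes the representative, and every vertex is moved to the representative of its cell. A full decimation would then also merge the now-coincident vertices and drop degenerate triangles:

```cpp
#include <cmath>
#include <cstdint>
#include <map>
#include <tuple>
#include <utility>
#include <vector>
#include <glm/glm.hpp>

using Cell = std::tuple<int64_t, int64_t, int64_t>;

// Grid cell a vertex falls into.
static Cell cellOf(const glm::vec3& v, float cellSize)
{
    return { (int64_t)std::floor(v.x / cellSize),
             (int64_t)std::floor(v.y / cellSize),
             (int64_t)std::floor(v.z / cellSize) };
}

// Cluster the vertices of ALL layers on one common grid and replace every
// vertex by its cluster's average, so the representative is shared by layers.
void clusterAcrossLayers(std::vector<std::vector<glm::vec3>>& layers, float cellSize)
{
    // First pass: accumulate a sum and a count per occupied cell.
    std::map<Cell, std::pair<glm::vec3, int>> clusters;
    for (const auto& layer : layers)
        for (const glm::vec3& v : layer) {
            auto& c = clusters.emplace(cellOf(v, cellSize),
                                       std::make_pair(glm::vec3(0.0f), 0)).first->second;
            c.first  += v;
            c.second += 1;
        }

    // Second pass: move every vertex to the representative of its cell.
    for (auto& layer : layers)
        for (glm::vec3& v : layer) {
            const auto& c = clusters.at(cellOf(v, cellSize));
            v = c.first / (float)c.second;
        }
}
```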
I have a program that visualizes triangular meshes and allows the users to draw on the meshes using a pen. I want to have a "snapping" mode in my system. The snapping mode performs drawing corrections for the user in the sense that the user-drawn lines are snapped to the nearest edge (or the silhouette) of that part of the mesh.
I'm looking for an algorithm that compute the edges visible on the mesh from a given point of view. By edges, I'm referring to the outlines of the shape: corner points and the lines between them (similar to the definition of an edge in computer vision/image processing -- such as Canny edges).
So far I've thought of two approaches for this:
Edge detection: so far I've only found this paper. Their method is understandable, yet the implementation is not trivial (due to tensor computations and some ambiguity in their explanations). The problem with this approach is that it produces "edge strength values": a value in the range [0, 1] for every vertex, where 1 indicates an edge vertex with high confidence. This introduces extra thresholding parameters into the system which I'd rather not have. Their output looks like this (range [0, 1] scaled to [0, 65535]):
Rendering or non-photorealistic methods such as the one asked in this question or this paper. They seem to be able to create the silhouette that I'm after as can be seen below:
I'm not a graphics expert and as of yet I don't know whether their methods can be used for computation of the feature lines rather than rendering.
I was wondering if anybody has any ideas about a good algorithm for what I want to do. Since the system is very interactive, performance is important. The snapping feature does not have to be enabled all the time, so if the method is computationally expensive, some delay when the "snapping enabled" mode is toggled can be tolerated while the algorithm computes the edges. Also, if you know of any implementation (preferably open source), I'd be grateful if you could share it with me.
There are two types of edges that you want to detect:
Silhouette edges are viewpoint dependent; they correspond to the places where the line of sight is tangent to the surface. With a triangulated model they are easy to determine, as they are shared by a front-facing triangle and a back-facing one (see the sketch after this list).
"Angular" edges are viewpoint independent and formed by a discontinuity in the direction of the tangent plane. As a triangulated model has this kind of discontinuity everywhere, there is no exact criterion to find them; just set a threshold on the angle formed by two adjacent triangles, chosen so that smooth patches do not trigger it.
By this approach, you will find the wanted edges in 3D.
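Here is a small sketch of the silhouette-edge test mentioned above, assuming an indexed triangle mesh with consistent (counter-clockwise) winding; the struct and function names are just illustrative:

```cpp
#include <map>
#include <utility>
#include <vector>
#include <glm/glm.hpp>

// Hypothetical indexed mesh: positions plus triangles as index triples.
struct Mesh {
    std::vector<glm::vec3>  positions;
    std::vector<glm::uvec3> triangles;
};

// Return the edges (as pairs of vertex indices) shared by one triangle facing
// the eye and one facing away from it: the silhouette for that viewpoint.
std::vector<std::pair<unsigned, unsigned>>
silhouetteEdges(const Mesh& mesh, const glm::vec3& eye)
{
    // For each undirected edge, remember whether an adjacent triangle was
    // front-facing and/or back-facing as seen from 'eye'.
    std::map<std::pair<unsigned, unsigned>, std::pair<bool, bool>> facing;

    for (const glm::uvec3& t : mesh.triangles) {
        const glm::vec3& a = mesh.positions[t.x];
        const glm::vec3& b = mesh.positions[t.y];
        const glm::vec3& c = mesh.positions[t.z];
        glm::vec3 n = glm::cross(b - a, c - a);        // outward for CCW winding
        bool front = glm::dot(n, eye - a) > 0.0f;

        unsigned idx[3] = { t.x, t.y, t.z };
        for (int i = 0; i < 3; ++i) {
            unsigned u = idx[i], w = idx[(i + 1) % 3];
            auto key = (u < w) ? std::make_pair(u, w) : std::make_pair(w, u);
            auto& f = facing[key];                     // defaults to {false, false}
            (front ? f.first : f.second) = true;
        }
    }

    std::vector<std::pair<unsigned, unsigned>> silhouette;
    for (const auto& e : facing)
        if (e.second.first && e.second.second)         // both orientations present
            silhouette.push_back(e.first);
    return silhouette;
}
```

For the "angular" edges you can walk the same edge map, but compare the normals of the two adjacent triangles against your angle threshold instead of their facing.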
Finding the edges in 3D is not enough, though, as some of them are hidden by other surfaces. You have the option of integrating them as edges in the 3D model and letting the rendering engine do its job, or, if you have the courage, of implementing a hidden-line removal algorithm. (The Wikipedia link is a little terse.)
Since posting the question, something else came into my head. Since 2D edge detection is a very well-studied problem, one way of tackling the problem is performing 2D edge detection on the projection image of the mesh.
In other words, given a specific view of the mesh, one could generate a 2D image. A 2D edge detection algorithm (such as Canny edge detector) could then be run on the 2D image and the results can be back-projected to 3D to determine the silhouettes of the mesh in question. One possible advantage of this is simplicity!
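As a sketch of that idea, assuming the current view has been rendered offscreen and is available as an image (here just loaded from a hypothetical "view.png"; in a real system you would read the framebuffer back directly), OpenCV's Canny detector gives the 2D edge pixels. The thresholds would need tuning for your renders:

```cpp
#include <opencv2/imgcodecs.hpp>
#include <opencv2/imgproc.hpp>

// Run a standard 2D edge detector on a rendered view of the mesh. Each edge
// pixel can then be back-projected to 3D, e.g. by sampling the depth buffer
// at that pixel.
cv::Mat detectViewEdges()
{
    cv::Mat view = cv::imread("view.png", cv::IMREAD_GRAYSCALE); // hypothetical render
    cv::Mat edges;
    cv::Canny(view, edges, 50.0, 150.0);   // thresholds need tuning per scene
    return edges;                          // binary image of edge pixels
}
```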
Edit (2017):
Even though I moved away from this, I returned to this problem again for a different purpose. To anybody else looking into this problem: there is a paper that talks about various contours from meshes that's worth reading (the paper is "Suggestive Contours for Conveying Shape" by DeCarlo et al.).
Working implementation of the methods discussed in the paper are available here.
I have a set of 3D points that approximate a surface. Each point, however, is subject to some error. Furthermore, the set contains a lot more points than are actually needed to represent the underlying surface.
What I am looking for is an algorithm to create a new (much smaller) set of points representing a simplified, smoother version of the surface (pardon for not having a better definition than "simplified, smoother"). The underlying surface is not a mathematical one so I'm not hoping to fit the data set to some mathematical function.
Instead of dealing with it as a point cloud, I would recommend triangulating a mesh using Delaunay triangulation: http://en.wikipedia.org/wiki/Delaunay_triangulation
Then decimate the mesh. You can research decimation algorithms, but you can get pretty good quick and dirty results with an algorithm that just merges adjacent tris that have similar normals.
I think you are looking for 'Level of detail' algorithms.
A simple one to implement is to break your volume (surface) into some number of sub-volumes. From the points in each sub-volume, choose a representative point (such as the one closest to the center, the one closest to the average, or the average itself). Use these points to redraw your surface.
You can tweak the number of sub-volumes to increase/decrease detail on the fly.
I'd approach this by looking for vertices (points) that contribute little to the curvature of the surface. Find all the sides emerging from each vertex and take the dot products of pairs (?) of them. The points representing very shallow "hills" will subtend huge angles (near 180 degrees) and have small dot products.
Those vertices with the smallest numbers would then be candidates for removal. The vertices around them will then form a plane.
Or something like that.
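Something like this, perhaps: for every vertex, check the directions of the edges leaving it and flag the vertices where some pair of edges is almost exactly opposite (dot product near -1). The adjacency layout, names and threshold here are illustrative assumptions:

```cpp
#include <algorithm>
#include <vector>
#include <glm/glm.hpp>

// For each vertex, look at the directions of the edges leaving it and find the
// pair whose dot product is smallest (closest to -1, i.e. nearly opposite).
// Vertices where such a pair exists sit on a very shallow "hill" and are
// candidates for removal. neighbours[i] is assumed to list the indices of the
// vertices adjacent to vertex i.
std::vector<unsigned> flatVertexCandidates(
    const std::vector<glm::vec3>& positions,
    const std::vector<std::vector<unsigned>>& neighbours,
    float flatDotThreshold = -0.99f)
{
    std::vector<unsigned> candidates;
    for (unsigned i = 0; i < positions.size(); ++i) {
        float minDot = 1.0f;
        const auto& nb = neighbours[i];
        for (size_t a = 0; a < nb.size(); ++a)
            for (size_t b = a + 1; b < nb.size(); ++b) {
                glm::vec3 da = glm::normalize(positions[nb[a]] - positions[i]);
                glm::vec3 db = glm::normalize(positions[nb[b]] - positions[i]);
                minDot = std::min(minDot, glm::dot(da, db));
            }
        if (minDot <= flatDotThreshold)
            candidates.push_back(i);
    }
    return candidates;
}
```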
Google for Hugues Hoppe and his "surface reconstruction" work.
Surface reconstruction is used to fit a meshed surface to the point cloud; however, this method yields lots of triangles. You can then apply a mesh reduction technique to reduce the polygon count in a way that minimizes error. As an example, you can look at OpenMesh's decimation methods.
OpenMesh
Hugues Hoppe
There exist several different techniques for point-based surface model simplification, including:
clustering;
particle simulation;
iterative simplification.
See the survey:
M. Pauly, M. Gross, and L. P. Kobbelt. Efficient simplification of point-sampled surfaces. In Proceedings of the conference on Visualization '02, pages 163–170, Washington, DC, 2002. IEEE.
Unless you parametrise your surface in some way, I'm not sure how you can decide which points carry similar information (and can thus be thrown away).
I guess you could choose a bunch of points at random to get rid of, but that doesn't sound like what you want to do.
Maybe points near each other (for some definition of "near") can be considered to contain similar information, and so be reduced to a single representative for each such group.
Could you give some more details?
It's simpler to simplify a point cloud without the constraints of mesh triangles and indices.
Smoothing and simplification are different tasks, though. To simplify the cloud you should first get rid of noise artefacts: build a profile of the kind of noise you have, its frequency and directional characteristics, and perform a reduction matched to that noise profile. Good normal vectors are helpful for that.
Here is a document about 5-6 simplification methods using Delaunay, Voronoi, and k-nearest-neighbour maths:
http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.10.9640&rep=rep1&type=pdf
A later version from 2008:
http://www.wseas.us/e-library/transactions/research/2008/30-705.pdf
Here is a recent C++ version:
https://github.com/tudelft3d/masbcpp/blob/master/src/simplify.cpp
Looking for any information/algorithms relating to comparing vector graphics. E.g. say there are two point collections or vector files with two almost identical figures. I want to determine that the first figure is about 90% similar to the second one.
A common way to test for similarity is with image moments. Moments are intrinsically translationally invariant, and if the objects you compare might be scaled or rotated you can use moments that are invariant to these transformations, such as Hu moments.
Most of the programs I know of would require rasterized versions of the vector objects, but the moments could be calculated directly from the vector graphics using a Green's Theorem approach. A more simplistic approach that just identifies unique (unordered) vertex configurations would be to convert the Hu moment integrals to sums over the vertices -- in a physics analogy, replacing the continuous object with equal point masses at each vertex.
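As a rough sketch of that point-mass idea: compute the second-order central moments from the vertices and combine them into the first two Hu invariants. These are translation and rotation invariant as written; note that with point masses the usual area normalisation does not give scale invariance, so if the figures may be scaled you would compare something like hu2/hu1² instead. The names and structure are just illustrative:

```cpp
#include <array>
#include <vector>

struct Point2 { double x, y; };

// Treat the figure as equal point masses at its vertices and compute the
// first two Hu invariants from the second-order central moments.
std::array<double, 2> huFromVertices(const std::vector<Point2>& pts)
{
    const double n = static_cast<double>(pts.size());

    double cx = 0.0, cy = 0.0;                        // centroid
    for (const Point2& p : pts) { cx += p.x; cy += p.y; }
    cx /= n; cy /= n;

    double mu20 = 0.0, mu02 = 0.0, mu11 = 0.0;        // central moments
    for (const Point2& p : pts) {
        double dx = p.x - cx, dy = p.y - cy;
        mu20 += dx * dx;
        mu02 += dy * dy;
        mu11 += dx * dy;
    }

    // Normalised by mu00^2 (mu00 == n for unit point masses).
    double eta20 = mu20 / (n * n), eta02 = mu02 / (n * n), eta11 = mu11 / (n * n);

    double hu1 = eta20 + eta02;
    double hu2 = (eta20 - eta02) * (eta20 - eta02) + 4.0 * eta11 * eta11;
    return { hu1, hu2 };        // compare hu2 / (hu1 * hu1) if scale may differ
}
```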
There is a paper on a tool called VISTO that sorts vector graphics images (using moments, I think), which should certainly be useful for more details.
You could search for fingerprint matching algorithms. Fingerprints are usually converted to a set of points with their relative location to each other, which makes it basically the same problem as yours.
You could transform it to a non-vector graphic and then apply standard image analysis techniques like SIFT points, etc.