How to implement origin/anchor point in GLKit scene graph?

I'm trying to implement a simple scene graph on iOS using GLKit but handling origin/anchor points is giving me fits. The requirements are pretty straightforward:
There is a graph of nodes, each with translation, rotation, scale, and an origin (anchor) point.
Each node combines the properties above into a single matrix (which is multiplied by its parent's matrix if it has a parent).
Nodes need to honor their parent's coordinate system, including the origin point (i.e. barring translations, etc., a child's origin should line up with the parent's origin).
So the question is:
What operations (e.g. translationMatrix * rotationMatrix * scaleMatrix, etc.) need to be performed and in what order so as to achieve the proper handling of origin/anchor points?
P.S. - If you are kind enough to post an answer, please mention whether it is based on column-major or row-major matrices - that's a perennial source of confusion for me.

Have a look at both SpriteKit and SceneKit. Both APIs provide the building blocks for creating scene graphs on iOS.
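If you do build your own node class on top of GLKit, a common way to fold the anchor point into each node's matrix is sketched below. This is one conventional composition, not something either framework mandates; GLKit's GLKMatrix4 is column-major and GLKMatrix4Multiply(a, b) computes a * b, so the right-most factor is the one applied to vertices first.

```cpp
#include <GLKit/GLKMath.h>

// Sketch only: rotation is shown around Z for brevity.
// local = T(position) * R * S * T(-anchor): vertices are first shifted so the
// anchor sits at the node's origin, then scaled, rotated, and moved into place.
GLKMatrix4 NodeLocalMatrix(GLKVector3 position, float rotationZ,
                           GLKVector3 scale, GLKVector3 anchor) {
    GLKMatrix4 t        = GLKMatrix4MakeTranslation(position.x, position.y, position.z);
    GLKMatrix4 r        = GLKMatrix4MakeZRotation(rotationZ);
    GLKMatrix4 s        = GLKMatrix4MakeScale(scale.x, scale.y, scale.z);
    GLKMatrix4 toAnchor = GLKMatrix4MakeTranslation(-anchor.x, -anchor.y, -anchor.z);

    GLKMatrix4 rs = GLKMatrix4Multiply(r, GLKMatrix4Multiply(s, toAnchor));
    return GLKMatrix4Multiply(t, rs);
}

// The parent's matrix goes on the left, so children are expressed in the
// parent's coordinate system (and thus inherit its anchor handling).
GLKMatrix4 NodeWorldMatrix(GLKMatrix4 parentWorld, GLKMatrix4 local) {
    return GLKMatrix4Multiply(parentWorld, local);
}
```

Whether children should also see the parent's anchor shift, or only its position/rotation/scale, is a design decision; the sketch above simply concatenates the full parent matrix.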

Related

CGAL - Preserve attributes on vertex using corefine boolean operations

I'm relatively new to CGAL, so please pardon me if there is an obvious fix to this that I'm missing.
I was following this example on preserving attributes on facets of a polyhedral mesh after a corefine+boolean operation:
https://github.com/CGAL/cgal/blob/master/Polygon_mesh_processing/examples/Polygon_mesh_processing/corefinement_mesh_union_with_attributes.cpp
I wanted to know if it was possible to construct a Visitor struct which similarly operates on vertices of a polyhedral mesh. Ideally, I'd like to interpolate property values (a vector of doubles) from the original mesh onto the new boolean output vertices, but I can settle for imputing nearest neighbor values.
The difficulty I was running into was that the after_subface_created and after_face_copy functions overloaded within Visitor operate before the halfedge structure is set for the target face, and hence I'm not sure how to access the vertices of the target face. Is there a way to use the Visitor structure within corefinement to do this?
In an older version of the code I used to have a visitor handling vertex creation/copy but it has not been backported (due to lack of time). What you are trying to do is a good workaround but you should use the visitor to collect the information (say fill a map [input face] -> std::vector<output face>) and process that map once the algorithm is done.
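To make that concrete, here is a rough sketch of the bookkeeping described above, written against CGAL's Surface_mesh. The callback names follow the visitor used in the linked corefinement_mesh_union_with_attributes.cpp example; treat the exact signatures (and the remaining callbacks the visitor concept requires) as assumptions to be checked against your CGAL version.

```cpp
#include <CGAL/Exact_predicates_inexact_constructions_kernel.h>
#include <CGAL/Surface_mesh.h>
#include <map>
#include <vector>

typedef CGAL::Exact_predicates_inexact_constructions_kernel K;
typedef CGAL::Surface_mesh<K::Point_3>                      Mesh;
typedef Mesh::Face_index                                    face_descriptor;

// Collects, for every input face, the faces it gives rise to in the output.
struct Face_tracking_visitor {
  std::map<face_descriptor, std::vector<face_descriptor> > faces_of;
  face_descriptor current_split;

  void before_subface_creations(face_descriptor f_split, Mesh&) {
    current_split = f_split;                  // face about to be split
  }
  void after_subface_created(face_descriptor f_new, Mesh&) {
    faces_of[current_split].push_back(f_new); // record each sub-face
  }
  void after_face_copy(face_descriptor f_src, const Mesh&,
                       face_descriptor f_tgt, Mesh&) {
    faces_of[f_src].push_back(f_tgt);         // faces copied into the result
  }
  // ...plus the remaining (empty) callbacks required by the visitor concept,
  // exactly as in the linked example.
};
```

Once the boolean operation has returned, the halfedge structure of the output is complete, so you can walk the vertices of each recorded output face and interpolate your per-vertex values from the corresponding input face, which sidesteps the problem of the halfedges not yet being set inside the callbacks.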

How to align "tracks" or modular objects in Unity?

I'm developing a simple game where the user can place different but modular objects (for instance: tracks, roads, etc.).
My question is: how do I match and place different objects when one is placed near another?
My first approach is to create a hidden child object (a box) for each modular object and put it on the border where another object can be placed (see my image example), so I can use those coordinates (x, y, z) to align the other object.
But I don't know if this is the best approach.
Thanks
Summary:
1. Define what a "snapping point" is
2. Define what your threshold is
3. Update the new game object's position
Little Explanation
1.
So I suppose that you need a way to define which parts of the object are the "snapping points".
They can be obvious in some cases, like a cube, where every vertex could be a snapping point, but it's hard to define that for every vertex of an amorphous object.
A simple solution could be the one proposed by #PierreBaret, which consists of defining on your transform component which points are the "snapping points".
The other one is the one you propose: creating empty game objects that will act as snapping point locations on the game object.
2. After having those snapping points, when you drop your new gameObject you need to define a threshold, since you don't want every object to always snap to the nearest game object.
3. So you define a minimum distance between snapping points; if a snapping point is under that threshold, you update its position to adjust to the snapped point (see the sketch further down).
Visual Representation:
Note: The threshold distance shown is just ONE of the 4 current threshold checks on the 4 vertices of the square; the dark blue circle should be replicated 3 more times, one for each green snapping point of the red square.
Of course this method can get expensive, so you can make some improvements, like setting a first coarse threshold between gameObjects and only checking the snapping threshold distance when the gameObject is inside it.
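The check in steps 2 and 3 boils down to: find the closest pair of snapping points between the dropped object and its neighbours, and translate the dropped object by their difference if they are close enough. A language-agnostic sketch (plain C++ with a stand-in vector type; in Unity you would use Vector3 and the Transform positions of your empty snapping-point objects):

```cpp
#include <cmath>
#include <vector>

// Stand-in for an engine vector type such as Unity's Vector3.
struct Vec3 { float x, y, z; };

static Vec3  sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static float len(Vec3 v)         { return std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z); }

// Returns true and writes the offset to apply to the dropped object when the
// closest pair of snapping points is within 'threshold'.
bool TrySnap(const std::vector<Vec3>& droppedPoints,    // world-space snapping points
             const std::vector<Vec3>& neighbourPoints,  // world-space snapping points
             float threshold, Vec3& offsetOut) {
    float best = threshold;
    bool found = false;
    for (const Vec3& d : droppedPoints) {
        for (const Vec3& n : neighbourPoints) {
            float dist = len(sub(n, d));
            if (dist < best) {            // closest pair under the threshold wins
                best = dist;
                offsetOut = sub(n, d);    // moving by this puts d exactly on n
                found = true;
            }
        }
    }
    return found;                          // caller adds offsetOut to the object's position
}
```

This is also where the cheap first pass mentioned above fits: only run the point-to-point loop for neighbours whose overall distance is already under a coarse threshold.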
Hope it helps!
Approach for arbitrary objects/models and deformable models.
[A] A physical approach would consider all the surfaces of the 2 objects, and you might need to check that objects don't overlap, using dot products between surfaces. That's a bit more computationally expensive, but nothing nasty. There is no matching involved here as such, but you'll be able to add matching features (see [B]). However, that's the only way to work with non-predefined models or deformable models.
Approaches for matching simple and complex models
[B] Snapping points are a good thing but not sufficient on their own. I think you need to make an object have:
a sparse representation (e.g., from a complex oriented sphere down to a cube),
and place key snapping points,
tagged by polarity or color, and possibly orientation (that gives oriented snapping points); e.g., in the case of rails, you'll want rails to snap {+} with {+} and forbid {+} with {-}. In the case of a more complex object, or when you have several orientations (e.g., 2 faces of a surface but only one is a candidate for a pair of objects matching), you'll need more than 2 polarities: 3 different ones per matching candidate surface or feature, hence the colors (or any enumeration). You need 3 different colors to make sure there is a unique 3D space configuration. You create something that is called an enantiomer in chemistry.
You can also use point pair features that describe the relative position and orientation of two oriented points, when an oriented surface is not appropriate.
References
Some are computer vision papers or book extracts, but they expose the algorithms and concepts needed to achieve what I described in my answer.
Model Globally, Match Locally: Efficient and Robust 3D Object Recognition, Drost et al.
3D Models and Matching

Understanding of NurbsSurface

I want to create a NurbsSurface in OpenGL. I use a grid of control points of size 40x48. In addition, I create indices in order to determine the order of the vertices.
In this way I created my surface of triangles.
Just to avoid misunderstandings. I have
float[] vertices=x1,y1,z1,x2,y2,z2,x3,y3,z3....... and
float[] indices= 1,6,2,7,3,8....
Now I don't want to draw triangles. I would like to interpolate the surface points. I thought about NURBS or B-splines.
The crux is:
In order to apply the NURBS algorithms I have to interpolate patch by patch. In my understanding, one patch is defined by, for example, points 1,6,2,7 or 2,7,3,8 (please open the picture).
First of all I created the vertices and indices in order to use a vertexshader.
But actually it would be enough to draw it the old way. In this case I would determine vertices and indices as follows:
float[] vertices= v1,v2,v3... with v=x,y,z
and
float[] indices= 1,6,2,7,3,8....
In OpenGL's utility library there is a NURBS renderer ready to use: gluNewNurbsRenderer. So I can render a patch easily.
Unfortunately, I fail at the point of how to stitch the patches together. I found an explanation, the Teapot example, but (maybe I have become obsessed by this) I can't transfer the solution to my case. Can you help?
You have a set of control points from which you want to draw a surface.
There are two ways you can go about this.
1. Calculate the vertices from the control points and pass them down the graphics pipeline with GL_TRIANGLES as the topology; this is what the Teapot example link you provided describes. Please remember that graphics hardware needs triangulated data in order to draw. This link shows how to evaluate vertices from control points:
http://www.glprogramming.com/red/chapter12.html
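A minimal sketch of that evaluator route for a single bicubic patch might look like the following (legacy fixed-function OpenGL, as in the red book chapter; the 4x4 patch would be one window into your 40x48 grid of control points):

```cpp
#include <GL/gl.h>

// Evaluate and draw one 4x4 bicubic patch with OpenGL evaluators.
void drawBicubicPatch(const GLfloat ctrlpoints[4][4][3]) {
    glMap2f(GL_MAP2_VERTEX_3,
            0.0f, 1.0f, 3, 4,          // u range, u stride, u order
            0.0f, 1.0f, 12, 4,         // v range, v stride, v order
            &ctrlpoints[0][0][0]);
    glEnable(GL_MAP2_VERTEX_3);
    glMapGrid2f(20, 0.0f, 1.0f,        // evaluate a 20x20 grid over the patch
                20, 0.0f, 1.0f);
    glEvalMesh2(GL_FILL, 0, 20, 0, 20);
}
```

Adjacent patches stay stitched together as long as they share their border row/column of control points, which gives positional continuity across the seams.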
2. Prepare patches from your control points and use tessellation shaders to triangulate and stitch those points. For this you pass each set of control points as a patch, using the GL_PATCHES primitive, to the tessellation control shader, where you specify the inner and outer tessellation levels you want. Depending on those levels, the patch is subdivided by a fixed-function stage known as the primitive generator. The generated vertices are then passed to the tessellation evaluation shader, in which you can fine-tune them; this is where you evaluate your B-spline basis to position each vertex on the surface.
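On the application side, the draw call for this path looks roughly like the sketch below; it assumes 4x4 (16-point) patches, an OpenGL 4.x context with a loader such as GLEW or glad already initialised, and a shader program whose tessellation evaluation shader does the actual B-spline evaluation (vao, program and indexCount are placeholders for your own objects):

```cpp
// Draw-call side of the tessellation route; the shaders themselves are not shown.
void drawNurbsPatches(GLuint vao, GLuint program, GLsizei indexCount) {
    glUseProgram(program);                         // vertex + TCS + TES (+ fragment)
    glPatchParameteri(GL_PATCH_VERTICES, 16);      // 16 control points per 4x4 patch
    glBindVertexArray(vao);                        // VAO holding the control-point VBO/IBO
    glDrawElements(GL_PATCHES, indexCount, GL_UNSIGNED_INT, nullptr);
}
```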
I would suggest you keep your VBO and IBO just as you have them with the control points and, when drawing, use the GL_PATCHES primitive as above. Follow the tutorial below on how to use tessellation shaders to draw NURBS surfaces.
Note: the second method I have suggested is kind of tricky and you will have to read a lot of research papers.
If you don't want to go with the modern pipeline, then I suggest going with option 1.

Finding cross on the image

I have a set of binary images on which I need to find a cross (examples attached). I use findContours to extract borders from the binary image. But I can't figure out how to determine whether this shape (border) is a cross or not. Maybe OpenCV has some built-in methods that could help solve this problem. I thought about solving this with machine learning, but I think there is a simpler way to do it. Thanks!
Viola-Jones object detection could be a good start. Though the main usage of the algorithm (AFAIK) is face detection, it was actually designed for any object detection, such as your cross.
The algorithm is a machine-learning-based algorithm (so you will need a set of classified "crosses" and a set of classified "not crosses"), and you will need to identify the significant "features" (patterns) that will help it recognize crosses.
The algorithm is implemented in OpenCV as cvHaarDetectObjects().
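For reference, the C++ side of using a trained cascade looks roughly like this (cv::CascadeClassifier is the C++ counterpart of cvHaarDetectObjects; "cross_cascade.xml" is a placeholder for a cascade you would first have to train on cross / not-cross samples):

```cpp
#include <opencv2/objdetect.hpp>
#include <opencv2/imgproc.hpp>
#include <vector>

// Detection side only; training the cascade is a separate offline step.
std::vector<cv::Rect> detectCrosses(const cv::Mat& image) {
    cv::CascadeClassifier cascade("cross_cascade.xml");  // hypothetical trained cascade
    cv::Mat gray;
    cv::cvtColor(image, gray, cv::COLOR_BGR2GRAY);       // skip if already grayscale

    std::vector<cv::Rect> crosses;
    cascade.detectMultiScale(gray, crosses);             // bounding boxes of detections
    return crosses;
}
```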
From the original image, let's say you've extracted sets of polygons that could potentially be your cross. Assuming that all of the cross is visible, to the extent that all edges can be distinguished as having a length, you could try the following.
Reject all polygons that do not have exactly the 12 vertices required to form your cross.
Re-order the vertices such that the shortest edge length is first.
Create a best fit perspective transformation that maps your vertices onto a cross of uniform size
Examine the residuals generated by using this transformation to project your cross back onto the uniform cross, where the residual for any given point is the distance between the projected point and the corresponding uniform point.
If all the residuals are within your defined tolerance, you've found a cross.
Note that this works primarily due to the simplicity of the geometric shape you're searching for. Your contours will also need to have noise removed for this to work, e.g. each line within the cross needs to be converted to a single simple line.
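The contour extraction and the 12-vertex rejection test (the first two steps) might look roughly like this with OpenCV; the perspective fit and residual check are left out, and the approxPolyDP epsilon is a tuning parameter, not a prescribed value:

```cpp
#include <opencv2/imgproc.hpp>
#include <vector>

// Steps 1-2 only: simplify each contour and keep candidates with exactly
// 12 corners; 'binary' is a single-channel binary image.
std::vector<std::vector<cv::Point>> findCrossCandidates(const cv::Mat& binary) {
    cv::Mat work = binary.clone();   // older OpenCV versions modify the input
    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(work, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);

    std::vector<std::vector<cv::Point>> candidates;
    for (const auto& c : contours) {
        std::vector<cv::Point> approx;
        // Collapse noisy edges into simple straight segments.
        cv::approxPolyDP(c, approx, 0.02 * cv::arcLength(c, true), true);
        if (approx.size() == 12)
            candidates.push_back(approx);
    }
    return candidates;
}
```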
Depending on your requirements, you could try some local feature detector like SIFT or SURF. Check OpenSURF which is an interesting implementation of the latter.
After some days of struggle, I came to the conclusion that the only robust way here is to use SVM + HOG. That's all.
You could erode each blob and analyze how its number of pixels goes down. No matter the rotation or scaling of the crosses, the count should always go down with the same ratio, except when you're closing in on the remaining center. Also, when the blob is small enough you should expect it to be at the center of the original blob. You won't need any machine learning algorithm or training data to solve this.
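A rough sketch of that idea: repeatedly erode the blob and record how the pixel count shrinks; for a cross the ratio between successive counts should stay roughly constant until only the centre remains (the kernel size and the way you compare the ratios are up to you):

```cpp
#include <opencv2/imgproc.hpp>
#include <vector>

// 'blob' is a single-channel binary image containing one blob.
std::vector<double> erosionShrinkRatios(const cv::Mat& blob) {
    cv::Mat kernel = cv::getStructuringElement(cv::MORPH_RECT, cv::Size(3, 3));
    cv::Mat current = blob.clone();
    std::vector<double> ratios;

    int prev = cv::countNonZero(current);
    while (prev > 0) {
        cv::erode(current, current, kernel);
        int now = cv::countNonZero(current);
        if (now == 0) break;
        ratios.push_back(static_cast<double>(now) / prev);  // shrink factor per step
        prev = now;
    }
    return ratios;  // compare against ratios measured on known crosses
}
```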

Quickest Way to create Face and Edge objects in Sketchup

I have to render a mesh of a few thousand polygons in Google Sketchup. I find that add_face tends to get slower as the number of faces in the model increases. I believe this to be due to some edge detection algorithm that Sketchup is running behind the scenes. Hopefully, there should be some way to suppress this edge detection or other processing that Sketchup is doing till all faces have been added to the model.
I found add_faces_from_mesh and fill_from_mesh to be much faster but I end up with a mesh consisting of Surface instances instead of the Face and Edge objects I am looking for.
So, what is the fastest way to generate a model consisting of Face and Edge objects in Sketchup? Is there a way to generate Edge and Face objects from a Surface object?
Update: I just read here that Model::start_transaction and Model::commit_transaction can be used to speed things up, but I found that the improvements are not very significant. Anything else I can do?
I found add_faces_from_mesh and fill_from_mesh to be much faster but I end up with a mesh consisting of Surface instances instead of the Face and Edge objects I am looking for.
Calling add_faces_from_mesh or fill_from_mesh with the smooth_flags parameter explicitly set to zero correctly constructs Face and Edge objects. Sketchup Documentation claims that smooth_flags defaults to zero... my trials show otherwise.
Just to clarify - add_faces_from_mesh and fill_from_mesh do add Edges and Faces - however, the default behaviour is to create a mesh with soft and smooth edges. When you have a set of Faces connected by Soft edges they will be treated as a surface by SketchUp and Entity Info window will say "Surface" when you select them.
However - internally it's still just a set of edges and faces - SketchUp has no Surface entity.
As for Model::start_transaction - you must specify true for the second disable_ui argument in order to see any speed gains. But as you have noticed, SU is very slow to add entities - the more there already is in the entities collection you are adding to, the slower it gets. The absolute fastest way to add entities is fill_from_mesh.
