Getting the current projection matrix - Processing

How do I get and set the current projection matrix in Processing (version 4 beta, P3D renderer)? Processing provides printProjection(), but doesn't provide a way of getting the data. The current transformation matrix can be saved, retrieved and set using pushMatrix(), getMatrix(), and setMatrix(). Is there an equivalent (or a foolproof workaround) for projection matrices?
I searched the javadoc version of the documentation and couldn't find anything.
I attempted to see what printProjection() does in the source code, and traced it through:
PApplet.g -> PGraphicsOpenGL.projection -> PMatrix3D
and also found PGraphicsOpenGL.pushProjection(). Both g and projection are public members.
Origin of this question: I have a class which sets up the camera (it calls camera() and perspective()). I have another class which needs to temporarily reset the transformation and projection matrices so that it can overlay text() on the screen. The transformation matrix can easily be pushed and popped. Is there an equivalent for the projection matrix?
I do not want to manually keep track of the calls to perspective() which set the projection matrix. One reason is that it would become a burden on the end user; another is that I need the original projection matrix set by the size() function to be able to properly place text() on the screen.
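For illustration, a minimal sketch of the workaround those members suggest. It assumes the public projection field and the undocumented pushProjection() found in the source have the obvious semantics, and that a matching popProjection() exists; none of this is documented API, so it may change between Processing versions.

    import processing.opengl.PGraphicsOpenGL;

    PMatrix3D savedProjection;

    void setup() {
      size(400, 400, P3D);
    }

    void draw() {
      PGraphicsOpenGL pg = (PGraphicsOpenGL) g;   // g is the current renderer

      // grab a copy of the current projection matrix
      savedProjection = pg.projection.get();

      // save the projection on the renderer's own stack, change it, draw, restore
      pg.pushProjection();
      perspective(PI / 3, float(width) / height, 1, 1000);
      // ... 3D drawing that needs the custom projection ...
      pg.popProjection();                         // back to the projection set by size()

      // the modelview is still handled with the documented calls
      pushMatrix();
      // ... overlay drawing such as text() ...
      popMatrix();
    }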

Related

CGAL - Preserve attributes on vertex using corefine boolean operations

I'm relatively new to CGAL, so please pardon me if there is an obvious fix to this that I'm missing.
I was following this example on preserving attributes on facets of a polyhedral mesh after a corefine+boolean operation:
https://github.com/CGAL/cgal/blob/master/Polygon_mesh_processing/examples/Polygon_mesh_processing/corefinement_mesh_union_with_attributes.cpp
I wanted to know if it was possible to construct a Visitor struct which similarly operates on vertices of a polyhedral mesh. Ideally, I'd like to interpolate property values (a vector of doubles) from the original mesh onto the new boolean output vertices, but I can settle for imputing nearest neighbor values.
The difficulty I was running into is that the after_subface_created and after_face_copy functions overloaded within the Visitor operate before the halfedge structure is set for the target face, and hence I'm not sure how to access the vertices of the target face. Is there a way to use the Visitor structure within corefinement to do this?
In an older version of the code I used to have a visitor handling vertex creation/copy but it has not been backported (due to lack of time). What you are trying to do is a good workaround but you should use the visitor to collect the information (say fill a map [input face] -> std::vector<output face>) and process that map once the algorithm is done.

THREEJS implementation of IfcAxis2Placement3D and IfcObjectPlacement

I am working on a WebGL viewer for IFC files. Most IfcRepresentation objects are easy to understand; however, I am not good at coordinate transformations. Is there a better way to translate and rotate an Object3D in THREE.js as defined by an IfcAxis2Placement3D? I guess it should rotate the object about the Z axis and then align the Z axis with a new vector; how do I implement this?
Another question is about IfcObjectPlacement. It always requires a sub PlacementRelTo object until PlacementRelTo == null. I am a bit confused again: is it a forward or a backward transformation if I want to read the absolute coordinates from this placement? I mean, should I use a push-pop or a direct order? For example, if there are matrices M1, M2, ..., Mn, is it M = M1 x M2 x ... x Mn or M = Mn x Mn-1 x ... x M2 x M1? I can find beautiful mesh objects in my project, but the position is always wrong. Please help me.
Thanks.
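As a generic illustration of the two orderings asked about: under the common column-vector convention (a point transforms as p' = M * p), a chain of relative placements composes root-first, i.e. M = M1 x M2 x ... x Mn when M1 is the outermost PlacementRelTo. A hedged sketch with a hypothetical Placement type (not any IFC toolkit's API):

    // Hypothetical placement node: a local transform plus an optional parent,
    // standing in for IfcObjectPlacement / PlacementRelTo.
    class Placement {
        double[] local;    // 4x4 relative transform, row-major double[16]
        Placement relTo;   // null at the root

        // world(P) = world(P.relTo) * local(P); expanding the recursion gives
        // M = M_root * ... * M_leaf for column vectors.
        double[] world() {
            return relTo == null ? local : multiply(relTo.world(), local);
        }

        static double[] multiply(double[] a, double[] b) {
            double[] r = new double[16];
            for (int row = 0; row < 4; row++)
                for (int col = 0; col < 4; col++)
                    for (int k = 0; k < 4; k++)
                        r[row * 4 + col] += a[row * 4 + k] * b[k * 4 + col];
            return r;
        }
    }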
Take a look at this article on matrix transformations for projection as a primer on matrix transformations.
It's worth noting that most geometry in an IFC model will be 'implicit' rather than 'explicit' shapes. That means extra processing needs to be performed before you can get the data into a form you could feed into a typical 3D scene - and so you'll be missing a lot of shapes from your model, as few will be explicitly modelled as a mesh. The long and short is that geometry in IFC is non-trivial (diagrams). That's before you start on placement/mapping/transformation to a world coordinate system.
It's not clear if you are using an IFC toolkit to process the raw IFC STEP data. If not, I'd recommend you do, as it will save a lot of work (likely years of work).
BuildingSmart maintains a list of resources to process IFC (commercial and open source).
I know the following toolkits are reasonably mature and capable of processing geometry. Some already have WebGL implementations you might be able to re-use.
http://ifcopenshell.org/ifcconvert.html & https://github.com/opensourceBIM
https://github.com/xBimTeam/XbimGeometry (disclosure - I am involved with this project)
http://www.ifcbrowser.com/

STEP Geometry Transformations

Lately I've been building a STEP (ISO 10303-21) importer as a necessary requirement for a project I've been working on. So far, I've got the geometry right (as far as I can tell), but the orientation and position are only right 60%-80% of the time, which leads me to think that I'm not properly handling AXIS2_PLACEMENT_3Ds.
Right now the way I parse the file starts at the SHAPE_REPRESENTATION_RELATIONSHIP and processes the two shape representations it contains. For most BREP shapes, it's just a simple 'cascade' effect until I reach the ADVANCED_FACE, where all 2D (edge) data is processed before being passed into the ELEMENTARY_SURFACE, which constructs the shape from that data.
Currently I'm using the transformation of all of the 2D edge geometry, but ignoring the transformation of the ELEMENTARY_SURFACE. I'm also ignoring all of the SHAPE_REPRESENTATION transformations, but using them to eventually 'get' to, and use, the ITEM_TRANSFORMATIONs.
I should also mention that (except for the 2D edge data) the transformations are all added up and applied at the end. To add a transformation, I convert the axes to a rotation matrix (via this question), multiply the rotation matrices together, and then simply add the translations.
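For illustration, a sketch of that axes-to-matrix step under one common convention: column vectors, with the placement's X/Y/Z directions becoming the columns of the rotation part and the location becoming the last column. The usual interpretation is that axis is the Z direction and the true X axis is ref_direction with its component along Z removed. Treat this as a hedged sketch, not a drop-in importer:

    // Turn an AXIS2_PLACEMENT_3D (location, axis = Z, ref_direction ~ X) into a
    // row-major 4x4 that maps placement-local coordinates into parent coordinates.
    class Axis2Placement3D {
        static double[] toMatrix(double[] location, double[] axis, double[] refDir) {
            double[] z = normalize(axis);
            // project ref_direction onto the plane orthogonal to Z to get the true X
            double d = dot(refDir, z);
            double[] x = normalize(new double[] { refDir[0] - d * z[0],
                                                  refDir[1] - d * z[1],
                                                  refDir[2] - d * z[2] });
            double[] y = cross(z, x);
            return new double[] {
                x[0], y[0], z[0], location[0],
                x[1], y[1], z[1], location[1],
                x[2], y[2], z[2], location[2],
                0,    0,    0,    1
            };
        }

        static double dot(double[] a, double[] b) { return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]; }
        static double[] cross(double[] a, double[] b) {
            return new double[] { a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0] };
        }
        static double[] normalize(double[] v) {
            double len = Math.sqrt(dot(v, v));
            return new double[] { v[0]/len, v[1]/len, v[2]/len };
        }
    }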
Update 1
I've changed the way that AXIS2_PLACEMENT_3Ds are combined by removing the translation addition. Now I'm just adding the rotations and using the second one's translation, and I seem to be getting oddly more accurate results.

How to implement origin/anchor point in GLKit scene graph?

I'm trying to implement a simple scene graph on iOS using GLKit but handling origin/anchor points is giving me fits. The requirements are pretty straightforward:
There is a graph of nodes, each with a translation, rotation, scale and origin point.
Each node combines the properties above into a single matrix (which is multiplied by its parent's matrix if it has a parent).
Nodes need to honor their parent's coordinate system, including the origin point (i.e. barring translations etc., a child's origin should line up with the parent's origin).
So the question is:
What operations (e.g. translationMatrix * rotationMatrix * scaleMatrix, etc.) need to be performed and in what order so as to achieve the proper handling of origin/anchor points?
P.S. - If you are kind enough to post an answer, please mention whether your answer is based on column- or row-major matrices - that's a perennial source of confusion for me.
Have a look at both SpriteKit and SceneKit. Both APIs provide the building blocks for creating scene graphs on iOS.
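As a generic illustration (not GLKit-specific API) of one common way to fold the origin/anchor point into each node's matrix, using column vectors so the rightmost factor applies first: local = T(translation) * R * S * T(-origin), and world = parentWorld * local. A hedged sketch with a hypothetical Node type:

    // Hypothetical scene-graph node; matrices are row-major double[16],
    // points are treated as column vectors (p' = M * p).
    class Node {
        double tx, ty, tz;               // translation
        double angleZ;                   // rotation about Z in radians (kept simple)
        double sx = 1, sy = 1, sz = 1;   // scale
        double ox, oy, oz;               // origin / anchor point
        Node parent;

        // local = T(t) * R * S * T(-origin): move the anchor to the local origin,
        // scale, rotate, then translate into the parent's coordinate system.
        double[] worldMatrix() {
            double[] local = multiply(translation(tx, ty, tz),
                              multiply(rotationZ(angleZ),
                              multiply(scale(sx, sy, sz),
                                       translation(-ox, -oy, -oz))));
            return parent == null ? local : multiply(parent.worldMatrix(), local);
        }

        static double[] translation(double x, double y, double z) {
            return new double[] { 1,0,0,x,  0,1,0,y,  0,0,1,z,  0,0,0,1 };
        }
        static double[] scale(double x, double y, double z) {
            return new double[] { x,0,0,0,  0,y,0,0,  0,0,z,0,  0,0,0,1 };
        }
        static double[] rotationZ(double a) {
            double c = Math.cos(a), s = Math.sin(a);
            return new double[] { c,-s,0,0,  s,c,0,0,  0,0,1,0,  0,0,0,1 };
        }
        static double[] multiply(double[] a, double[] b) {
            double[] r = new double[16];
            for (int i = 0; i < 4; i++)
                for (int j = 0; j < 4; j++)
                    for (int k = 0; k < 4; k++)
                        r[i*4 + j] += a[i*4 + k] * b[k*4 + j];
            return r;
        }
    }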

Moving object OpenGL ES 2.0

I am a bit confused about how I need to move my basic square. Should I use my translation matrix or just change the object's vertices? Which one is accurate?
In my vertex shader I use
gl_Position = myPMVMatrix * a_vertex;
and I also use a VBO.
From an accuracy point of view both methods are about equally good.
From a performance point of view, it's about minimizing bottlenecks:
For a single square you are probably not able to measure any differences, but when you think about 1 million squares (or triangles), things get a little more complicated:
If all of your triangles change position relative to each other, you are probably better off changing the VBO, because you can push the data directly to the graphics card's memory instead of making a million OpenGL calls (which are very slow).
If all your triangles stay at the same position relative to each other (as is the case for a normal 3D model), you should just change the transformation matrix. In this case you don't have to push the data onto the graphics memory again, you have only one function call, and you are transferring only a few bytes of data.
Depending on your application it may be a good choice to divide your triangles into different categories and update them appropriately.
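A sketch of the first case, re-uploading the whole buffer in one call (assuming an Android/Java host for concreteness; the GL entry points are the same in other bindings):

    import java.nio.ByteBuffer;
    import java.nio.ByteOrder;
    import java.nio.FloatBuffer;
    import android.opengl.GLES20;

    // Case 1: vertices move relative to each other, so re-upload the VBO contents
    // in a single call instead of issuing per-object GL calls.
    class VboUpdater {
        void upload(int vboId, float[] vertices) {
            FloatBuffer buf = ByteBuffer.allocateDirect(vertices.length * 4)
                    .order(ByteOrder.nativeOrder())
                    .asFloatBuffer();
            buf.put(vertices).position(0);

            GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, vboId);
            // overwrite the existing buffer storage with the new vertex positions
            GLES20.glBufferSubData(GLES20.GL_ARRAY_BUFFER, 0, vertices.length * 4, buf);
            GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, 0);
        }
    }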
Don't move objects by changing all of the vertices! What about a complex model with thousands of vertices? Even if it's a simple square, don't pick up such bad practice. That's exactly what transformation matrices are for.

You are already using a transformation matrix in your shader code. From the naming I assume it's a premultiplied model-view-projection matrix. So it consists of the model matrix positioning the object in world space (here is where your translation usually should go), the view matrix positioning the world in eye/camera space (sometimes model and view matrix are combined into a single modelview matrix, like in fixed-function GL), and the projection matrix doing any kind of perspective projection and/or transformation to the clipping volume, all three multiplied together as P * V * M.

If there are still some questions about these transformation matrices and their use, consult some literature on 3D transformations or just your favourite OpenGL tutorial.
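And a sketch of the second case on the host side, building the premultiplied matrix the shader above calls myPMVMatrix as P * V * M, with the movement going into the model matrix (again assuming an Android/Java host; the camera and projection values are placeholders):

    import android.opengl.GLES20;
    import android.opengl.Matrix;

    // Case 2: the square's vertices stay in the VBO untouched; only a small
    // uniform (the combined matrix) is sent to the GPU each frame.
    class SquareMover {
        private final float[] model = new float[16];
        private final float[] view  = new float[16];
        private final float[] proj  = new float[16];
        private final float[] pv    = new float[16];
        private final float[] pvm   = new float[16];

        void drawSquare(int program, float aspect, float x, float y, float z) {
            Matrix.setIdentityM(model, 0);
            Matrix.translateM(model, 0, x, y, z);        // the move lives in M

            Matrix.setLookAtM(view, 0, 0f, 0f, 5f,  0f, 0f, 0f,  0f, 1f, 0f);
            Matrix.perspectiveM(proj, 0, 45f, aspect, 0.1f, 100f);

            Matrix.multiplyMM(pv, 0, proj, 0, view, 0);  // P * V
            Matrix.multiplyMM(pvm, 0, pv, 0, model, 0);  // (P * V) * M

            int loc = GLES20.glGetUniformLocation(program, "myPMVMatrix");
            GLES20.glUniformMatrix4fv(loc, 1, false, pvm, 0);
            // ... bind the VBO and issue the draw call as before ...
        }
    }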
