I am working on a WebGL viewer for IFC files. Most IfcRepresentation objects are easy to understand, but I am not confident with coordinate transformations. Is there a good way to translate and rotate a Three.js Object3D as defined by an IfcAxis2Placement3D? My guess is that it should rotate the object about the Z axis and then align that Z axis to a new vector; how do I implement this?
Another question is about IfcObjectPlacement. It always refers to a parent placement via PlacementRelTo until PlacementRelTo == null. I am confused again: is it a forward or backward transformation if I want to read the absolute coordinates from this placement? That is, should I use a push-pop traversal or apply the matrices in direct order? For example, given matrices M1, M2, ..., Mn, is the result M = M1 x M2 x ... x Mn or M = Mn x Mn-1 x ... x M2 x M1? I can see beautiful mesh objects in my project, but their positions are always wrong. Please help me.
Thanks.
Take a look at this article on matrix transformations for projection as a primer.
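For the first question specifically, here's a minimal sketch (my own function name and plain-array inputs, not from any toolkit) of turning an IfcAxis2Placement3D into a THREE.Matrix4. Per the IFC definition, Location is the origin, Axis is the local Z direction, and RefDirection fixes the local X direction, projected perpendicular to Z:

import * as THREE from "three";

function axis2Placement3DToMatrix(location, axis, refDirection) {
  // IFC defaults when the optional attributes are omitted:
  // Axis = (0, 0, 1), RefDirection = (1, 0, 0)
  const z = new THREE.Vector3(...(axis ?? [0, 0, 1])).normalize();
  const x = new THREE.Vector3(...(refDirection ?? [1, 0, 0]));
  // Gram-Schmidt: project RefDirection onto the plane perpendicular to Z
  x.sub(z.clone().multiplyScalar(x.dot(z))).normalize();
  const y = new THREE.Vector3().crossVectors(z, x); // right-handed basis
  return new THREE.Matrix4()
    .makeBasis(x, y, z)
    .setPosition(new THREE.Vector3(...location));
}

For the second question, walk the PlacementRelTo chain up to the root and multiply parent-first: with column-vector conventions, M = M_root x ... x M_parent x M_local, i.e. the usual "push" order of a scene graph, where M1 in your notation is the outermost placement.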
It's worth noting that most geometry in an IFC model will be 'implicit' rather than 'explicit' shapes. That means extra processing needs to be performed before you can get the data into a form you could feed into a typical 3D scene, so you'll be missing a lot of shapes from your model, as few will be explicitly modelled as a mesh. The long and short of it is that geometry in IFC is non-trivial (diagrams), and that's before you start on placement/mapping/transformation to a world coordinate system.
It's not clear if you are using an IFC toolkit to process the raw IFC STEP data. If not, I'd recommend you do, as it will save a lot of work (likely years of it).
BuildingSmart maintains a list of resources for processing IFC (commercial and open source).
I know the following toolkits are reasonably mature and capable of processing geometry. Some already have WebGL implementations you might be able to re-use.
http://ifcopenshell.org/ifcconvert.html & https://github.com/opensourceBIM
https://github.com/xBimTeam/XbimGeometry (disclosure - I am involved with this project)
http://www.ifcbrowser.com/
The H3 library uses a Dymaxion orientation, which means that the hexagon grid is rotated to an unusual angle relative to the equator/meridian lines. This makes sense when modelling the Earth, as the twelve pentagons then all lie in the water, but it would be unnecessary when using the library to map other spheres (like the sky or other planets). In that case it would be more intuitive and aesthetically pleasing to align the icosahedron to put a pentagon at the poles and along the meridian. I'm trying to work out what I would need to change in the library to achieve that. It looks like I would need to recalculate the faceCenterGeo and faceCenterPoint tables in faceijk.c, but do I need to recalculate faceAxesAzRadsCII as well? I don't really understand what that latter table is...
Per this related answer, the main changes you'd need for other planets are to change the radius of the sphere (only necessary if you want to calculate distances or areas) and, as you ask, the orientation of the icosahedron. For the latter:
faceCenterGeo defines the icosahedron orientation in lat/lng points
faceCenterPoint is a table derived from faceCenterGeo that defines the center of each face as 3d coords on a unit sphere. You could create your own derivation using generateFaceCenterPoint.c
faceAxesAzRadsCII is a table derived from faceCenterGeo that defines the angle from each face center to each of its three vertices. This does not have a generation script, and TBH I don't know how it was originally generated. It's used in the core algorithms translating between grid coordinates and geo coordinates, however, so you'd definitely need to update it.
I'd strongly suggest that taking this approach is a Bad Idea:
It's a fair amount of work - not (just) the calculations, but recompiling the code, maintaining a fork, possibly writing bindings in other languages for your fork, etc.
You'd break most tests involving geo input or output, so you'd be flying blind as to whether your updated code is working as expected.
You wouldn't be able to take advantage of other projects built on H3, e.g. bindings for other languages and databases.
If you want to re-orient the geometry for H3, I'd suggest doing exactly that - apply a transform to the input geo coordinates you send to H3, and a reverse transform to the output geo coordinates you get from H3. This has a bunch of advantages over modifying the library code:
It's a lot easier
You could continue to use the maintained library
You could apply these transformations outside of the bindings, in the language of your choice
Your own code is well-separated from 3rd-party library code
There's probably a very small performance penalty to this approach, but in almost all cases that's a tiny price to pay compared to the difficulties you avoid.
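To illustrate, here's a minimal sketch of that wrapper approach, assuming the h3-js bindings (v4 API: latLngToCell / cellToLatLng). The rotation is a placeholder tilt about the x axis; you'd substitute whatever rotation puts a pentagon at the pole:

import { latLngToCell, cellToLatLng } from "h3-js";

const DEG = Math.PI / 180;
const TILT = 30 * DEG; // hypothetical re-orientation angle

// Rotate a (lat, lng) pair about the x axis by `angle` radians.
function rotateLatLng(lat, lng, angle) {
  // lat/lng -> unit vector on the sphere
  const x = Math.cos(lat * DEG) * Math.cos(lng * DEG);
  const y = Math.cos(lat * DEG) * Math.sin(lng * DEG);
  const z = Math.sin(lat * DEG);
  // rotate about the x axis
  const y2 = y * Math.cos(angle) - z * Math.sin(angle);
  const z2 = y * Math.sin(angle) + z * Math.cos(angle);
  // unit vector -> lat/lng
  return [Math.asin(z2) / DEG, Math.atan2(y2, x) / DEG];
}

// Forward transform on the way in, reverse transform on the way out.
function orientedLatLngToCell(lat, lng, res) {
  const [rLat, rLng] = rotateLatLng(lat, lng, TILT);
  return latLngToCell(rLat, rLng, res);
}

function orientedCellToLatLng(cell) {
  const [rLat, rLng] = cellToLatLng(cell);
  return rotateLatLng(rLat, rLng, -TILT);
}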
I'm trying to replicate a real-world camera within Three.js, where I have the camera's distortion specified as parameters for a "plumb bob" model. In particular I have P, K, R and D as specified here:
If I understand everything correctly, I want to set up Three.js to do the translation from "X" on the right to the "input image" in the top left. One approach I'm considering is making a Three.js Camera of my own and setting the projectionMatrix to ... something involving K and D. Does this make sense? What exactly would I set it to, considering K and D are specified as one-dimensional arrays of length 9 and 5 respectively? I'm a bit lost as to how to combine all the numbers :(
I notice in this answer that there are complicated things necessary to render straight lines as curved, the way they would be with certain camera distortions (like a fisheye lens). I do not need that for my purposes if it is more complicated; simply rendering each vertex in the correct spot is sufficient.
This document shows the step by step derivation of the camera matrix (Matlab reference).
See: http://www.vision.caltech.edu/bouguetj/calib_doc/htmls/parameters.html
So, yes: you can calculate the matrix using this procedure and use it to map real-world 3D points to the 2D output image.
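As a starting point for the Three.js side, here's a hedged sketch of building a projection matrix from the 3x3 intrinsic matrix K alone (the 9-element row-major array: fx, 0, cx, 0, fy, cy, 0, 0, 1). Note the distortion coefficients D cannot be folded into a linear projection matrix; applying them needs a per-vertex or per-pixel warp. The function name, the near/far defaults, and the assumption that the image y axis points down are mine:

import * as THREE from "three";

function cameraFromK(K, width, height, near = 0.1, far = 1000) {
  const fx = K[0], cx = K[2], fy = K[4], cy = K[5];
  const camera = new THREE.PerspectiveCamera();
  // Off-axis frustum so the principal point (cx, cy) is respected.
  const left = (-cx / fx) * near;
  const right = ((width - cx) / fx) * near;
  const top = (cy / fy) * near;
  const bottom = (-(height - cy) / fy) * near;
  camera.projectionMatrix.makePerspective(left, right, top, bottom, near, far);
  camera.projectionMatrixInverse.copy(camera.projectionMatrix).invert();
  return camera;
}

The extrinsics R (and the translation inside P) would then go into the camera's position/quaternion rather than the projection matrix.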
Specifically, I'd ideally want images with point correspondences, a 'Gold Standard' calculated value of F, and the left and right epipoles. I could work with an essential matrix plus intrinsic and extrinsic camera properties too.
I know that I can construct F from two projection matrices, generate left and right projected point coordinates from actual 3D points, and apply Gaussian noise, but I'd really like to work with someone else's reference data, since I'm trying to test the efficacy of my code, and writing more code to test the first batch of (possibly bad) code doesn't seem smart.
Thanks for any help
Regards
Dave
You should work with ground-truth datasets for multi-view reconstruction. I recommend the Middlebury Multi-View Stereo datasets. Besides the image data in a lossless format, they provide camera parameters, such as camera pose and intrinsic calibration, as well as the possibility to evaluate your own multi-view reconstruction system.
Perhaps the results are not computed by "the" Gold Standard algorithm proposed in the book by Hartley and Zisserman, but you can use them to compute the fundamental matrices you require between two views.
To compute the fundamental matrix F from two projection matrices P1 and P2 refer to the code Andrew Zisserman provides.
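For reference, the construction that code implements (from Hartley and Zisserman) is, in plain notation: F = [e']x * P2 * pinv(P1), where pinv(P1) is the pseudo-inverse of P1, C is the first camera's centre (the null vector with P1 * C = 0), e' = P2 * C is the right epipole, and [e']x is the 3x3 skew-symmetric cross-product matrix of e'. The epipoles you mention come from the same quantities: the left epipole is e = P1 * C', with C' the centre of the second camera.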
I'm trying to make a 3D object do a wobble effect, very much like the one a boss in Star Fox 64 did when it teleported (see this video at 5:17 for reference). This seems like either a skewing effect, or perhaps a non-uniform scale that rotated around and was applied without rotating the object itself.
Does anyone have any idea how this might be done, or perhaps does anyone have any links to programs where I can play with the matrices directly to see how this is done?
You can use a skew based on the roll axis in the Euler angle coordinate system.
See Euler angles
http://en.wikipedia.org/wiki/Euler_angles
Euler angles to matrix transformation (the "General rotations" part of the article):
http://en.wikipedia.org/wiki/Rotation_matrix
A Euler angles-to-matrix conversion utility in the DirectX SDK:
http://msdn.microsoft.com/en-us/library/microsoft.windowsmobile.directx.matrix.rotationyawpitchroll%28v=VS.85%29.aspx
And threads about skew matrices
skew matrix algorithm
http://www.quantunet.com/flash8/knowledgebase/actionscript/advanced/matrix/matrix_skew.html
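If it's the rotating non-uniform scale you guessed at, one way to sketch it (here in Three.js; the constants are made up for illustration) is to conjugate a squash matrix by a time-varying rotation, R * S * R^-1, so the squash axis spins while the object itself never rotates:

import * as THREE from "three";

// Squash along an axis that spins over time, without rotating the object.
function wobbleMatrix(time) {
  const angle = time * 4.0;                       // spin of the squash axis
  const amount = 1 + 0.3 * Math.sin(time * 10.0); // pulsing squash factor
  const r = new THREE.Matrix4().makeRotationZ(angle);
  const rInv = new THREE.Matrix4().makeRotationZ(-angle);
  const s = new THREE.Matrix4().makeScale(amount, 1 / amount, 1);
  return r.multiply(s).multiply(rInv); // R * S * R^-1
}

// Per frame, assuming `mesh` is the model and `clock` is a THREE.Clock:
// mesh.matrixAutoUpdate = false;
// mesh.matrix.copy(wobbleMatrix(clock.getElapsedTime()));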
I'm trying to build something like the Liquify filter in Photoshop. I've been reading through image-distortion code, but I'm struggling to find out what will create similar effects. The closest reference I could find was the iWarp filter in GIMP, but the code for that isn't commented at all.
I've also looked at places like ImageMagick, but they don't have anything in this area.
Any pointers or a description of algorithms would be greatly appreciated.
Excuse me if I make this sound a little simplistic; I'm not sure how much you know about gfx programming or even what techniques you're using (I'd do it with HLSL myself).
The way I would approach this problem is to generate a texture which contains offsets of x/y coordinates in the r/g channels. Then the output colour of a pixel would be:
Texture inputImage
Texture distortionMap

// offsets are read from the distortion map's red/green channels
colour(x, y) = inputImage(x + distortionMap(x, y).R,
                          y + distortionMap(x, y).G)
(To tell the truth, this isn't quite right: using the colours as offsets directly means you can only represent positive vectors. It's simple enough to subtract 0.5 so that you can represent negative vectors.)
Now the only problem that remains is how to generate this distortion map, which is a different question altogether (any image would generate a distortion of some kind; obviously, working out a proper liquify effect is quite complex, and I'll leave it to someone more qualified).
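For a concrete (if slow) version of the same idea without shaders, here's a sketch in plain JavaScript on canvas ImageData. Re-centring around 128 corresponds to the "subtract 0.5" note above; the /4 scale factor is an arbitrary cap on the offset:

// srcData and mapData are ImageData objects of the same dimensions.
function applyDistortion(srcData, mapData, width, height) {
  const out = new ImageData(width, height);
  for (let y = 0; y < height; y++) {
    for (let x = 0; x < width; x++) {
      const i = (y * width + x) * 4;
      // channels are 0..255; 128 means "no offset"
      const dx = Math.round((mapData.data[i] - 128) / 4);
      const dy = Math.round((mapData.data[i + 1] - 128) / 4);
      const sx = Math.min(width - 1, Math.max(0, x + dx));
      const sy = Math.min(height - 1, Math.max(0, y + dy));
      const j = (sy * width + sx) * 4;
      for (let c = 0; c < 4; c++) out.data[i + c] = srcData.data[j + c];
    }
  }
  return out;
}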
I think Liquify works by altering a grid.
Imagine each pixel is defined by its location on the grid.
When the user clicks on a location and moves the mouse, they're displacing the grid points.
The new grid is then projected back into the user's 2D viewable space.
Check this tutorial about a way to implement the Liquify filter with JavaScript. Basically, in the tutorial, the effect is achieved by transforming each pixel's Cartesian coordinates (x, y) to polar coordinates (r, α) and then applying Math.sqrt to r.
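That sqrt remap is easy to sketch. The helper below maps each output pixel back to the source position it should sample; the names and the circular-brush handling are my own, and the tutorial's version may differ in detail:

// Map an output pixel back to its source position inside a circular
// brush centred at (cx, cy). The angle is preserved; only the radius
// is remapped with Math.sqrt, which pulls pixels toward the centre.
function bulgeSource(x, y, cx, cy, radius) {
  const dx = x - cx, dy = y - cy;
  const r = Math.sqrt(dx * dx + dy * dy);
  if (r >= radius) return [x, y];              // outside the brush: untouched
  const a = Math.atan2(dy, dx);                // polar angle
  const rNew = Math.sqrt(r / radius) * radius; // sqrt remap of the radius
  return [cx + rNew * Math.cos(a), cy + rNew * Math.sin(a)];
}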