XNA large coordinates and float precision - xna-4.0

I have my model created in AutoCAD as a .dwg file with large coordinates such as (X = 528692.833, Y = -261.184, Z = 1.890). When the model is exported to an .fbx file, distortions appear in the FBX Viewer, and the same happens in XNA with the converted .xnb file.
What I can't understand is why the problem persists when the model is translated back near the origin using a world matrix in the XNA framework. Is the problem related to float precision, since the XNA API only works with single-precision floats? Or is there some problem with the .xnb file?
What are the possible workarounds, apart from moving the model close to the origin in AutoCAD?
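For illustration, here is a quick sketch of how coarse single precision already is at these magnitudes (plain JavaScript rather than C#, purely because the arithmetic is identical in any language using 32-bit floats). Near 5×10^5 the spacing between adjacent float values is 2^-4 = 0.0625, so any vertex detail finer than that is lost the moment the positions are stored as floats, before any world matrix is applied:

// Spacing of 32-bit floats near the model's X coordinate.
// Math.fround rounds a double to the nearest representable 32-bit float.
const x = 528692.833;
const asFloat32 = Math.fround(x);              // 528692.8125
console.log(asFloat32 - x);                    // about -0.02 (rounding error on one coordinate)
const nextFloat32 = Math.fround(x + 0.0625);   // 528692.875
console.log(nextFloat32 - asFloat32);          // 0.0625 (smallest representable step here)

A common workaround (besides re-basing in AutoCAD) is to subtract a fixed offset from the vertex positions as early in the pipeline as possible, ideally before export while the data is still in double precision, and add that offset back into the world/view translation at render time.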
Edit:
I have also realized that if a simple custom shader is used instead of BasicEffect, I don't encounter this problem.
Below is a sample scene of the rendered model:
http://imageshack.us/photo/my-images/15/distortion.png/

Related

Rotation and translation in 3D reconstruction using 2D images

I am trying to do 3D model reconstruction using 2D images taken from different views. I am following this MATLAB example to get the desired results:
Structure From Motion From Two Views.
The following are the test images taken with the camera:
Manually captured first and second images with a translation of 1 cm:
Overlay with matched features of the first and second images:
Manually captured first and second images with a translation of 2 cm:
Overlay with matched features of the first and second images:
These are the translation vectors and rotation matrices I get for each case:
1cm translation:
translation vector:[0.0245537412606279 -0.855696925927505 -0.516894461905255]
rotation matrix:
[0.999958322438693 0.00879926762261436 0.00243439415451741;
-0.00887800587357739 0.999365801035702 0.0344844418829408;
-0.00212941243132160 -0.0345046172211024 0.999402269855899]
2cm translation:
translation vector:[-0.215835469166982 -0.228607603749042 -0.949291111175908]
rotation matrix:
[0.999989695803078 -0.00104036790630347 -0.00441881457943975;
0.00149220346018613 0.994626852476622 0.103514238930121;
0.00428737874479779 -0.103519766069424 0.994618156086259]
The documentation says these are the relative rotation and translation between the two images.
But I am unable to understand what these numbers mean or what their units are.
Can anyone at least tell me what units the translation and rotation are in, or how to extract rotation and translation values that can be compared to real-world quantities such as cm/mm and radians/degrees, respectively?
You can convert the rotation matrix into an axis-angle representation, where the angle is given in radians. This can be done using the vrrotmat2vec function, or by implementing the conversion yourself by following this if you don't have access to the package.
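For illustration, here is a hedged sketch (in plain JavaScript rather than MATLAB, but the formula is the same) of the standard conversion that vrrotmat2vec performs: the angle comes from the trace of R and the axis from its antisymmetric part. It ignores the degenerate case where the angle is close to 180 degrees:

function rotationMatrixToAxisAngle(R) {
  // trace(R) = 1 + 2*cos(theta)
  const trace = R[0][0] + R[1][1] + R[2][2];
  const cosTheta = Math.min(1, Math.max(-1, (trace - 1) / 2)); // clamp for numerical safety
  const theta = Math.acos(cosTheta);
  if (theta < 1e-9) return { axis: [1, 0, 0], angleRad: 0 };   // no rotation
  // Axis from the antisymmetric part of R
  const s = 2 * Math.sin(theta);
  const axis = [
    (R[2][1] - R[1][2]) / s,
    (R[0][2] - R[2][0]) / s,
    (R[1][0] - R[0][1]) / s,
  ];
  return { axis: axis, angleRad: theta, angleDeg: theta * 180 / Math.PI };
}

// The 1 cm rotation matrix from the question gives an angle of roughly 0.036 rad (about 2 degrees).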
When it comes to the translation, however, you won't get it in a unit that makes sense in the real world, since you don't know the scale. This is unfortunately a problem with structure from motion in general: it is impossible to know whether you took an image close to something small or far away from something large.
When using structure from motion to construct a 3D model this is fortunately not a problem, since you still get the relative distances correctly. Therefore you will be able to capture the scene (by following the rest of the tutorial), but you won't be able to say whether something is 2 cm or 2 km tall, unless you have something in the image whose real-life size you know.
Hope it helps :)

THREEJS implementation of IfcAxis2Placement3D and IfcObjectPlacement

I am working on a WebGL viewer for IFC files. Most IfcRepresentation objects are easy to understand; however, I am not good at coordinate transformations. Is there a good way to translate and rotate an Object3D in three.js as defined by an IfcAxis2Placement3D? I guess it should rotate the object about the Z axis and then align the Z axis with a new vector, but how do I implement this?
Another question is about IfcObjectPlacement. It always refers to a parent PlacementRelTo object until PlacementRelTo == null. I am a bit confused again: is it a forward or a backward transformation if I want to read absolute coordinates from this placement? I mean, should I use a push-pop order or a direct order? For example, if there are matrices M1, M2, ..., Mn, is M = M1 x M2 x ... x Mn or M = Mn x Mn-1 x ... x M2 x M1? I can see beautiful mesh objects in my project but the position is always wrong. Please help me.
Thanks.
Take a look at this article on matrix transformations for projection as a primer on matrix transformations.
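By way of illustration (this is not from the linked article, and the entity field names below are simplified assumptions), here is a sketch of how an IfcAxis2Placement3D could be turned into a THREE.Matrix4, and how the PlacementRelTo chain composes, with the object's own placement applied first and each parent premultiplied on the left:

import * as THREE from 'three';

// Assumed plain-JS shapes, for illustration only:
//   placement3d     = { location: [x,y,z], axis: [x,y,z] | null, refDirection: [x,y,z] | null }
//   objectPlacement = { relativePlacement: placement3d, placementRelTo: objectPlacement | null }

// Build a Matrix4 from an IfcAxis2Placement3D: Z comes from Axis, X from
// RefDirection projected onto the plane normal to Z, and Y = Z x X.
function axis2Placement3DToMatrix(p) {
  const z = new THREE.Vector3(...(p.axis || [0, 0, 1])).normalize();
  const x = new THREE.Vector3(...(p.refDirection || [1, 0, 0]));
  x.sub(z.clone().multiplyScalar(x.dot(z))).normalize();   // make X orthogonal to Z
  const y = new THREE.Vector3().crossVectors(z, x);
  const m = new THREE.Matrix4().makeBasis(x, y, z);         // columns are the local axes
  m.setPosition(new THREE.Vector3(...p.location));
  return m;
}

// Walk the placement chain. The object's own matrix is applied first and every
// ancestor is premultiplied, i.e. M = M_root x ... x M_parent x M_local.
function absolutePlacementMatrix(objectPlacement) {
  const m = new THREE.Matrix4();                            // identity
  for (let p = objectPlacement; p; p = p.placementRelTo) {
    m.premultiply(axis2Placement3DToMatrix(p.relativePlacement));
  }
  return m;
}

// mesh.applyMatrix4(absolutePlacementMatrix(placement));

In three.js's column-vector convention the root placement ends up on the left, which corresponds to M = M1 x M2 x ... x Mn when M1 is the outermost placement.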
It's worth noting that most geometry in an IFC model will be 'implicit' rather than 'explicit' shapes. That means extra processing needs to be performed before you can get the data into a form you could feed into a typical 3D scene, and so you'll be missing a lot of shapes from your model, as few will be explicitly modelled as a mesh. The long and short of it is that geometry in IFC is non-trivial (diagrams). That's before you start on placement/mapping/transformation to a world coordinate system.
It's not clear whether you are using an IFC toolkit to process the raw IFC STEP data. If not, I'd recommend you do, as it will save a lot of work (likely years of work).
BuildingSmart maintain a list of resources to process IFC (commercial and open source)
I know the following toolkits are reasonably mature and capable of processing geometry. Some already have WebGL implementations you might be able to re-use.
http://ifcopenshell.org/ifcconvert.html & https://github.com/opensourceBIM
https://github.com/xBimTeam/XbimGeometry (disclosure - I am involved with this project)
http://www.ifcbrowser.com/

How to convert an image into a 3D object - three.js

For my requirements, I need to show different buildings to the user on mouse movements (like roaming around a city). I have a number of buildings as photos. How can I convert all of those into 3D objects?
a) I can trace (draw) the building in Illustrator so it becomes a 2D drawing, but how do I convert that into a 3D object? And will three.js accept the vector drawing created from it?
b) Or can I directly use the different images to make a 3D view? If so, say I have a front view and a back view: how do I convert them into a 3D model?
Here is my sample image:
Can anyone help me understand how to make this 3D and let my camera go around it?
I am looking for some good clues, sample work, or tutorials.
a) Don't trace it into a vector; it's going to be 2D if you do so. How would you roam around a 2D image?
b) This approach defeats the purpose of having a real 3D model instead of just the image plane; the result won't be convincing at all.
What you really need to do is the following:
1 - Like 2pha said, get your hands on a 3D modeling package such as 3ds Max or Maya (both paid), or you can use Blender, which is free and open source. You need a 3D modeling package to recreate, or model, a 3D representation of that building (manually).
This process is very artistic, and requires good knowledge of the software. You'd be better off hiring someone who'd model the building(s) for you.
This video showcases how a 2d image is recreated in 3d:
http://www.youtube.com/watch?v=AnfVrP6L89M
2 - Once the 3d models of the building(s) are created, you need to export them to a format that three.js understands, such as .fbx or .obj
3 - Now, you'd have to import the .obj or .fbx file with your building into three.js (see the sketch just after these steps).
4 - Implement the controls and other requirements.
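As a hedged sketch of step 3 (the file path is a placeholder, and the exact import path for the loader varies between three.js versions), loading an exported .obj could look like this:

import * as THREE from 'three';
import { OBJLoader } from 'three/addons/loaders/OBJLoader.js'; // path depends on your three.js version

const scene = new THREE.Scene();
const loader = new OBJLoader();
loader.load(
  'models/building.obj',                        // hypothetical path to the exported model
  (building) => { scene.add(building); },       // add the imported building group to the scene
  undefined,                                    // progress callback not needed here
  (err) => console.error('Failed to load model', err)
);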
Cheers!
You need to model it in a 3D modelling program such as 3ds Max, then export it to use in three.js. The closest thing I know of that can convert images into a model is 123D. You really need to search Google for a vague question like this.

three.js mesh with many textures

I'm currently trying to create a three.js mesh which has a large number of faces (in the thousands) and uses textures. However, my problem is that each face can have its texture changed at runtime, so potentially every face could have a different texture.
I tried preloading a materials array (for MeshFaceMaterial) with default textures and assigning each face a different materialIndex, but that generated a lot of lag.
A bit of research led me here, which says:
If number is large (e.g. each face could be potentially different), consider different solution, using attributes / textures to drive different per-face look.
I'm a bit confused about how shaders work, and in particular I'm not even sure how you would use textures with attributes. I couldn't find any examples of this online, as most texture/shader-related examples I found used uniforms instead.
So my question is this: is there an efficient way to create a mesh with a large number of textures, changeable at runtime? If not, are there any examples of the aforementioned attributes/textures idea?
Indeed, this can be a tricky thing to implement. I can't speak much to GLSL (I'm still learning), but what I do know is that uniforms are constant across a draw call, so you would likely want an attribute for your case (though I welcome being corrected here). However, I do have a far simpler suggestion.
You could use one texture that you "subdivide" into all the small textures you need for each face. Then at runtime you can change the UV coordinates into that texture and apply them to faces individually. You'll still have some computation time, but for a thousand or so faces it should be doable. I tested with a 25k-face model and it was quick to change all faces per tick.
The trick is navigating the faceVertexUvs three-dimensional array. For example, with a textured cube of 12 faces you could reset all faces to equal one side like so:
// faceVertexUvs[0][faceIndex] holds the three UVs of one triangular face.
// Faces 2 and 3 are the two triangles of the side we want every quad to copy.
// Step by 2 because two triangles make up each quad.
for (var uvCnt = 0; uvCnt < mesh.geometry.faceVertexUvs[0].length; uvCnt += 2) {
    // first triangle of the quad
    mesh.geometry.faceVertexUvs[0][uvCnt][0] = mesh.geometry.faceVertexUvs[0][2][0];
    mesh.geometry.faceVertexUvs[0][uvCnt][1] = mesh.geometry.faceVertexUvs[0][2][1];
    mesh.geometry.faceVertexUvs[0][uvCnt][2] = mesh.geometry.faceVertexUvs[0][2][2];
    // second triangle of the quad
    mesh.geometry.faceVertexUvs[0][uvCnt + 1][0] = mesh.geometry.faceVertexUvs[0][3][0];
    mesh.geometry.faceVertexUvs[0][uvCnt + 1][1] = mesh.geometry.faceVertexUvs[0][3][1];
    mesh.geometry.faceVertexUvs[0][uvCnt + 1][2] = mesh.geometry.faceVertexUvs[0][3][2];
}
Here I have a cube that has 6 colors (one per side) and I loop through each faceVertexUv entry (stepping by 2, as two triangles make a plane) and reset all the UVs to my second side, which is blue. Of course you'll want to map the coordinates into an object of some sort so you can easily query it to return and reset the corresponding UVs, but I don't know your use case. For completeness, you'll want to run mesh.geometry.uvsNeedUpdate = true; at runtime to see the updates. I hope that helps.
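To make the "subdivide one texture" idea concrete, here is a hedged sketch (the function name, arguments, and UV winding are assumptions; it uses the same legacy Geometry/faceVertexUvs API as the snippet above) that assigns one tile of an n-by-n texture atlas to a given quad:

// Assign tile `tileIndex` of an n-by-n atlas to quad `quadIndex` (two triangles).
function setFaceTile(geometry, quadIndex, tileIndex, tilesPerSide) {
  var size = 1 / tilesPerSide;                            // tile width/height in UV space
  var u0 = (tileIndex % tilesPerSide) * size;
  var v0 = Math.floor(tileIndex / tilesPerSide) * size;
  var u1 = u0 + size, v1 = v0 + size;
  var uvs = geometry.faceVertexUvs[0];
  var f = quadIndex * 2;                                  // two triangles per quad
  // One plausible corner order; adjust the winding to match your geometry.
  uvs[f][0].set(u0, v1);     uvs[f][1].set(u0, v0);     uvs[f][2].set(u1, v1);
  uvs[f + 1][0].set(u0, v0); uvs[f + 1][1].set(u1, v0); uvs[f + 1][2].set(u1, v1);
  geometry.uvsNeedUpdate = true;                          // tell three.js to re-upload the UVs
}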

Get POINT CLOUD through 360 Degree Rotation and Image Processing

My question is in two parts, as below.
QUESTION (IN SHORT):
• I want to generate a point cloud of a real-world object
• through a 360-degree rotation of it on a rotating table,
• capturing 360 images, one image at each degree (1° to 360°).
• I know how to process an image and get its pixel values.
• See one sample image below. You can see the image is black and white, because I have to deal with objects which are very shiny (glittery), in this case a DIAMOND. So I have set up the background so that the shiny object (diamond) is converted into a B/W object, and I can easily scan the outer edge of the object (e.g. the diamond).
• One thing to consider is that I am not using any laser; I am just using one rotating table and one camera for taking images. You can see one sample project over here, but there MATLAB hides all the details, because that person uses MATLAB's built-in functionality.
• I am really looking for a math routine, algorithm, or any technique that helps me work out how to get the point cloud using the approach I have described.
MORE ELABORATION:
I need to obtain a point cloud of a real-world object so that I can display it on a computer screen.
For that I am using a rotating table. I will put my object on it, rotate the table through a complete 360° rotation, and take 360 images, one image at each degree (1° to 360°).
The camera used for taking the images is well calibrated. I have given one sample image below. I also know how to scan an image and get its pixel values.
Also take into consideration that my images are silhouettes, meaning just black and white, no color images.
My problem, or where I am stuck, is in
getting the point cloud of the object from the data I obtain by processing the images.
I found one similar kind of project over here.
But it just uses built-in MATLAB functions. I am using Microsoft Visual C#.NET, so I have to build the entire algorithm myself, because MATLAB hides all the things I want to know.
Is there anyone who knows this whole process well and can get me out of this trap?
Thanks.
I have no experience of this, but if I wanted to do something like this I would try the following:
1 - Use a single-color light source.
2 - If possible, create a light source which falls on a thin vertical slice of the object.
3 - Take the 360 B/W images; each will be an image of a vertical line with varying intensity. If you use MATLAB, your matrix will have one or a few columns with some values.
4 - Now assume a vertical line (your axis of rotation).
5 - Plot or convert (imageNo, rowNoOfMatrix, ValueInPopulatedColumnInSameRow), assuming images are numbered from 0 to 360.
6 - Under ideal conditions, a crude way to get X and Y is X = K1 * cos(imgNo) * ValInCol and Y = K1 * sin(imgNo) * ValInCol, and Z will be some K2 * rowNum. K1 and K2 can be calibrated knowing the actual size of the object (see the sketch at the end of this answer).
I mean something like this:
http://fab.cba.mit.edu/content/processes/structured_light/
but instead of using structured light, using a single vertical light.
http://www.geom.uiuc.edu/~samuelp/del_project.html - this link might help with triangulation.
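Building on point 6 above, here is a hedged sketch (in plain JavaScript; the array layout, K1, and K2 are assumptions for illustration) of the cylindrical back-projection: for every image, take the measured edge distance from the rotation axis at each row and spin it around the axis by that image's angle:

// edgeDistances[imgNo][row] = distance (in pixels) from the rotation axis to the
// object's outer edge in silhouette image `imgNo` (0..359) at that image row.
function silhouettesToPointCloud(edgeDistances, K1, K2) {
  const points = [];
  for (let imgNo = 0; imgNo < edgeDistances.length; imgNo++) {
    const angle = imgNo * Math.PI / 180;           // one image per degree
    const column = edgeDistances[imgNo];
    for (let row = 0; row < column.length; row++) {
      const r = column[row];
      if (r <= 0) continue;                        // no edge found on this row
      points.push({
        x: K1 * Math.cos(angle) * r,               // K1: pixels -> world units (radial)
        y: K1 * Math.sin(angle) * r,
        z: K2 * row,                               // K2: pixels -> world units (vertical)
      });
    }
  }
  return points;
}

The resulting list of {x, y, z} points is the point cloud; calibrating K1 and K2 against an object of known size scales it to real units.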
