Creating a 3D map with 2D depth images in Processing - processing

I'm creating a 3D map with 2D depth images in Processing. I have captured the images using saveFrame(), but I am having difficulty converting those saved frames into 3D. Is there a website or code I could look through for help? Any help would be much appreciated.

Before going in depth into your question, I want to mention that instead of saveFrame() you can use the standard DXF library to export 3D models rather than 2D images from Processing, if you simply want to store a scene:
https://processing.org/reference/libraries/dxf/
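A minimal sketch of that export flow, modeled on the DXF library's documented usage (the output filename and the 'r' key binding are arbitrary choices):

    import processing.dxf.*;

    boolean record = false;

    void setup() {
      size(400, 400, P3D);
    }

    void draw() {
      if (record) {
        // Everything drawn between beginRaw() and endRaw() is written to the file.
        beginRaw(DXF, "scene.dxf");
      }
      background(0);
      translate(width/2, height/2);
      rotateY(frameCount * 0.01);
      box(100);
      if (record) {
        endRaw();
        record = false;  // record exactly one frame, then stop
      }
    }

    void keyPressed() {
      if (key == 'r') record = true;  // press 'r' to export the next frame
    }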
Now back to your question. First of all, what are depth images? Are they simply saveFrame() captures of a 3D scene in Processing (P3D), or are they special images? "Depth" is quite a general term.
If they are captures of a 3D scene and you know the camera's coordinates and view angle, the task gets considerably easier, but it is technically impossible to reconstruct a 3D object from 2D images alone without something like X-ray. Imagine looking at a fork: your eyes make two pictures of that fork, yet you have no idea what might be inscribed on its back. No matter how many pictures you have of your 3D scene, you won't be able to convert it into 3D perfectly. That said, this is a well-studied problem in computer science, and there are various methods that approximate a solution. Wikipedia has articles on it:
http://en.wikipedia.org/wiki/3D_reconstruction_from_multiple_images
http://en.wikipedia.org/wiki/2D_to_3D_conversion
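If your frames really are depth maps, i.e. grayscale images in which brightness encodes distance, you can at least lift a single frame into a rotatable point cloud. Here is a hedged Processing sketch under that assumption; depth.png is a placeholder filename and the depth scale is arbitrary:

    PImage depth;

    void setup() {
      size(640, 480, P3D);
      depth = loadImage("depth.png");  // assumed grayscale depth map
      depth.loadPixels();
    }

    void draw() {
      background(0);
      translate(width/2, height/2, -200);
      rotateY(map(mouseX, 0, width, -PI, PI));  // move the mouse to orbit
      stroke(255);
      int step = 4;  // sample every 4th pixel for speed
      for (int y = 0; y < depth.height; y += step) {
        for (int x = 0; x < depth.width; x += step) {
          // Brightness (0-255) stands in for distance from the camera.
          float z = brightness(depth.pixels[y * depth.width + x]);
          point(x - depth.width/2, y - depth.height/2, -z);
        }
      }
    }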
Here are a few stackoverflow topics which might help you get started:
3d model construction using multiple images from multiple points (kinect)
How to create 3D model from 2D image
Converting several 2D images into 3D model

Related

Threejs and 3D Tiles (from Cesium)

I am currently in charge of exploring options to display large 3D geological models on a web page. They are built by geologists with GeoModeller and exported via Cinema 4D to .DAE or .OBJ. Once displayed, the model should be interactive and linked to a database (that part is manageable on my side).
The issue: the models can be really big and I'm concerned that they could cause crashes and render slowly.
Solution considered so far: three.js + 3D Tiles (from Cesium).
Questions: Is combining three.js and 3D Tiles actually doable? It is according to the 3D Tiles presentation page, but I am not a programmer and I have no idea how to implement it.
Is there another obvious solution to my problem?
Resources: What these 3D models look like: http://advancedgwt.com/wp-content/uploads/blog/63.jpg
What 3D Tiles does when combined with Cesium (but we don't want a globe here!): http://cesiumjs.org/NewYork
ThreeJS has everything needed to implement a 3D Tiles viewer.
Here's an implementation (by me) : https://github.com/ebeaufay/3DTilesViewer
Here's another one by NASA: https://github.com/NASA-AMMOS/3DTilesRendererJS
The viewer itself is not too difficult to implement, but tiling and multi-leveling gigabytes of mesh data is a real challenge. Luckily, I have code to do just that, so hit me up if you're interested.
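To give a sense of what the three.js side looks like, here is a short sketch following the documented usage of the NASA-AMMOS renderer linked above; the tileset path is a placeholder for whatever your tiling pipeline produces:

    import { Scene, PerspectiveCamera, WebGLRenderer } from 'three';
    import { TilesRenderer } from '3d-tiles-renderer';

    const scene = new Scene();
    const camera = new PerspectiveCamera(60, window.innerWidth / window.innerHeight, 1, 100000);
    camera.position.set(0, 200, 400);

    const renderer = new WebGLRenderer();
    renderer.setSize(window.innerWidth, window.innerHeight);
    document.body.appendChild(renderer.domElement);

    // Root tileset JSON produced by a 3D Tiles tiling pipeline (placeholder path).
    const tiles = new TilesRenderer('./tiles/tileset.json');
    tiles.setCamera(camera);
    tiles.setResolutionFromRenderer(camera, renderer);
    scene.add(tiles.group);

    function animate() {
      requestAnimationFrame(animate);
      camera.updateMatrixWorld();
      tiles.update();  // streams tiles in and out based on the current view
      renderer.render(scene, camera);
    }
    animate();

Because only the tiles needed for the current view are kept in memory, this streaming is exactly what keeps multi-gigabyte models from crashing the browser.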

How to convert an image into a 3D object - three.js

In my requirement, I need to show different buildings to the user on mouse movements (like roaming a city). I have a number of buildings as photos. How can I convert all of those into 3D objects?
a) I can trace (draw) them in Illustrator so each building becomes a 2D drawing, but how do I convert that into a 3D object? And will three.js accept the vector drawing created from it?
b) Or can I directly use the different images to make a 3D view? If so, say I have a front view and a back view: how do I convert them into a 3D model?
here is my sample image:
Can anyone help me make this 3D, with a camera that can go around it?
I am looking for some good clue or sample works or tutorials.
a) Don't trace it into a vector; it is still going to be 2D if you do so. How would you roam a 2D image?
b) This approach defeats the purpose of having a real 3D model instead of the image plane; the result won't be convincing at all.
What you really need to do is the following:
1 - Like 2pha said, get your hands on a 3D modeling package such as 3ds Max or Maya (both paid), or use Blender, which is free and open source. You need a 3D modeling package to recreate, or model, a 3D representation of that building (manually).
This process is very artistic, and requires good knowledge of the software. You'd be better off hiring someone who'd model the building(s) for you.
This video showcases how a 2d image is recreated in 3d:
http://www.youtube.com/watch?v=AnfVrP6L89M
2 - Once the 3D models of the building(s) are created, you need to export them to a format that three.js understands, such as .fbx or .obj.
3 - Now you'd have to import the .obj or .fbx file with your building into three.js (a sketch of this step follows below).
4 - Implement the controls and other requirements.
Cheers!
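For steps 3 and 4, a minimal hedged sketch using three.js's OBJLoader and OrbitControls; building.obj is a placeholder for your exported model, and the import paths assume the examples/jsm layout of recent three.js releases:

    import * as THREE from 'three';
    import { OBJLoader } from 'three/examples/jsm/loaders/OBJLoader.js';
    import { OrbitControls } from 'three/examples/jsm/controls/OrbitControls.js';

    const scene = new THREE.Scene();
    const camera = new THREE.PerspectiveCamera(60, window.innerWidth / window.innerHeight, 0.1, 1000);
    camera.position.set(0, 5, 15);

    const renderer = new THREE.WebGLRenderer({ antialias: true });
    renderer.setSize(window.innerWidth, window.innerHeight);
    document.body.appendChild(renderer.domElement);

    scene.add(new THREE.AmbientLight(0xffffff, 0.6));
    const sun = new THREE.DirectionalLight(0xffffff, 0.8);
    sun.position.set(10, 20, 10);
    scene.add(sun);

    // Step 4: orbit controls let the camera roam around the building.
    const controls = new OrbitControls(camera, renderer.domElement);

    // Step 3: load the model exported from the modeling package (placeholder name).
    new OBJLoader().load('building.obj', (obj) => scene.add(obj));

    function animate() {
      requestAnimationFrame(animate);
      controls.update();
      renderer.render(scene, camera);
    }
    animate();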
You need to model it in a 3D modelling program such as 3ds Max, then export it for use in three.js. The closest thing I know of that can convert images into a model is Autodesk 123D. You really need to search Google for a vague question like this.

Conversion of 2D image to 3D image

I am going to develop a system which will take a 2D still image as input and produce a 3D image as output.
So the steps are:
1. creating a depth map from 2D image
2. creating 3D image from depth map and original image.
Can anybody suggest algorithms to generate the depth map from a 2D image?
As far as I know, there's no 100% bulletproof algorithm that can convert a 2D image to a 3D model. Simply put, there's not enough information inside a 2D image to fully reconstruct something 3D. Some 3D TV sets manage to produce fake 3D from the 2D input, but nothing really convincing (and sometimes plainly wrong).
What well-known software does (like the Kinect's) is use several sources instead of one single 2D image. With pictures from different angles, you can track particular features across the images and, with geometric computations, output something 3D. See http://en.wikipedia.org/wiki/3D_reconstruction_from_multiple_images for a full explanation.
If you're stuck with a single image, the best-known tool is the human eye: humans can easily reconstruct 3D from a picture by unconsciously merging several factors, such as their experience of the scene, the focus blur, the "far-away fog" effect, and so on. So the best way for you to get a result is to paint the depth map yourself in any image editing software...
Julien
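To illustrate step 2 of the question: once a depth map exists (hand-painted or otherwise), turning it plus the original image into a 3D relief is mechanical. A hedged three.js sketch, with photo.jpg and depth.png as placeholder filenames, uses a displacement map to push a plane's vertices out by the depth values:

    import * as THREE from 'three';

    const scene = new THREE.Scene();
    const camera = new THREE.PerspectiveCamera(60, window.innerWidth / window.innerHeight, 0.1, 100);
    camera.position.z = 3;

    const renderer = new THREE.WebGLRenderer();
    renderer.setSize(window.innerWidth, window.innerHeight);
    document.body.appendChild(renderer.domElement);

    const loader = new THREE.TextureLoader();
    const color = loader.load('photo.jpg');  // the original 2D image (placeholder)
    const depth = loader.load('depth.png');  // the grayscale depth map (placeholder)

    // displacementMap moves each vertex along its normal by the map's brightness.
    const material = new THREE.MeshStandardMaterial({
      map: color,
      displacementMap: depth,
      displacementScale: 0.5,  // arbitrary; tune to taste
    });

    // A densely subdivided plane, so there are enough vertices to displace.
    const plane = new THREE.Mesh(new THREE.PlaneGeometry(2, 2, 256, 256), material);
    scene.add(plane);
    scene.add(new THREE.AmbientLight(0xffffff, 1));

    renderer.setAnimationLoop(() => renderer.render(scene, camera));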

Transform a set of 2d images representing all dimensions of an object into a 3d model

Given a set of 2D images that cover all dimensions of an object (e.g. a car and its roof/sides/front/rear), how could I transform this into a 3D object?
Are there any libraries that could do this?
Thanks
These "2D images" are usually called "textures". You probably want a 3D library which allows you to specify a 3D model with bitmap textures. The library would depend on platform you are using, but start with looking at OpenGL!
OpenGL for PHP
OpenGL for Java
... etc.
I've heard of the program "Poser" doing this using heuristics for human forms, but otherwise I don't believe it is actually theoretically possible. You are asking to construct volumetric data from flat data, i.e. to infer the third dimension.
I think you'd have to make a ton of assumptions about your geometry, and even then you'd only really have a shell of the object. Done well, that gives you a contiguous surface representing the boundary of the object, not a volumetric object itself.
What you can do, as Tomas suggested, is slap these 2D images onto something. However, you will still need to construct a triangle-mesh surface, and actually do all the modeling, for this to present a 3D surface.
I hope this helps.
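To make the "slap these 2D images onto something" idea concrete, here is a hedged three.js sketch that texture-maps six photos of a car onto a box; the filenames are placeholders, and as noted above the result is only a crude shell, not real geometry:

    import * as THREE from 'three';

    const scene = new THREE.Scene();
    const camera = new THREE.PerspectiveCamera(60, window.innerWidth / window.innerHeight, 0.1, 100);
    camera.position.set(4, 3, 6);
    camera.lookAt(0, 0, 0);

    const renderer = new THREE.WebGLRenderer();
    renderer.setSize(window.innerWidth, window.innerHeight);
    document.body.appendChild(renderer.domElement);

    // One photo per box face, in three.js face order: +x, -x, +y, -y, +z, -z.
    const loader = new THREE.TextureLoader();
    const files = ['right.jpg', 'left.jpg', 'roof.jpg', 'bottom.jpg', 'front.jpg', 'rear.jpg'];
    const materials = files.map((f) => new THREE.MeshBasicMaterial({ map: loader.load(f) }));

    const car = new THREE.Mesh(new THREE.BoxGeometry(4, 1.5, 2), materials);
    scene.add(car);

    renderer.setAnimationLoop(() => renderer.render(scene, camera));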
What currently exists that can do anything close to what you are asking for automagically is extremely proprietary. There are no libraries, but there are some products.
The core issue is matching corresponding points across the images: being able to say that this spot in image A is this spot in image B, and that they both match this spot in image C, and so on.
There are three ways to go about this: manual matching (you have the photos and have to use your own brain to find the corresponding points), coded targets, and texture matching.
PhotoModeller, www.photomodeller.com, US$1,145, supports manual matching and coded targets. You print out a bunch of targets, attach them to your object, shoot your photos, and the software finds the targets in each picture and creates a 3D object based on those points.
PhotoModeller Scanner, US$2,595, adds texture matching: tiny bits of the images are compared to see whether they represent the same source area.
Both PhotoModeller products depend on shooting the images with a calibrated camera, where you use a consistent focal length for every shot and go through a calibration process to map the lens distortion of the camera.
If you can do manual matching, the Match Photo feature of Google SketchUp may do the job, and SketchUp is free. If you can shoot new photos, you can add your own targets, such as colored sticker dots, to the object to help you generate contours.
If your images are drawings, such as profile and plan views, PhotoModeller will not help you, but SketchUp may be just the tool you need. You will have to build up each part manually, because you will have to supply the intelligence to recognize which lines and points correspond from drawing to drawing.
I hope this helps.

3d model construction using multiple images from multiple points (kinect)

Is it possible to construct a 3D model of a still object if various images, along with depth data, are gathered from various angles? What I was thinking was to have a sort of circular conveyor belt on which a Kinect would be placed, while the real object to be reconstructed in 3D space sits in the middle. The conveyor belt would then rotate around the object in a circle and capture lots of images (perhaps 10 per second), which would allow the Kinect to catch an image from every angle, including the depth data. Theoretically this seems possible. The model would also have to be recreated with its textures.
What I would like to know is whether there are any similar projects or software already available; any links would be appreciated.
Whether this is possible within, say, 6 months.
How I would proceed to do this, e.g. any similar algorithms you could point me to.
Thanks,
MilindaD
It is definitely possible; there are a lot of working 3D scanners out there, most based on more or less the same principle of stereoscopy.
You probably know this, but just to contextualize: the idea is to take two images of the same point from different viewpoints and use triangulation to compute that point's 3D coordinates in your scene. Although this part is quite easy, the big issue is finding the correspondence between the points in your two images, and this is where you need good software that can extract and recognize similar points.
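In the simplest case, two rectified cameras side by side, that triangulation collapses to a single formula: depth = focal length x baseline / disparity. A tiny sketch, assuming the focal length (in pixels) and the baseline are known from calibration:

    // Rectified stereo: depth from the disparity of one matched point.
    // f = focal length in pixels, B = distance between the cameras (meters);
    // xLeft and xRight are the matched pixel columns in the two images.
    function depthFromDisparity(xLeft, xRight, f, B) {
      const disparity = xLeft - xRight;
      if (disparity <= 0) return Infinity;  // point at (or beyond) the horizon
      return (f * B) / disparity;           // depth in the same units as B
    }

    // Example: f = 600 px, 0.1 m baseline, 12 px disparity -> 5 m away.
    console.log(depthFromDisparity(312, 300, 600, 0.1));  // 5

Finding which pixel in one image matches which pixel in the other is the hard part that the software has to solve.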
There is an open-source project called MeshLab for 3D vision, which includes 3D reconstruction* algorithms. I don't know the details of the algorithms, but the software is definitely a good entry point if you want to play with 3D.
I used to know some other ones, I will try to find them and add them here:
Insight3d
(*Wiki page has no content, redirects to login for editing)
Check out https://bitbucket.org/tobin/kinect-point-cloud-demo/overview, which is a code sample for the Kinect for Windows SDK that does specifically this. Currently it takes the bitmaps captured by the depth sensor and iterates through the byte array to create a point cloud in PLY format that can be read by MeshLab. The next stage for us is to apply and refine a Delaunay triangulation algorithm to form a mesh instead of points, to which a texture can then be applied. A third stage would be a mesh-merging step to combine multiple captures from the Kinect into a full 3D object mesh.
This is based on some work I did in June using the Kinect for the purposes of 3D printing capture.
The .NET code in this source code repository will, however, get you started with what you want to achieve.
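The core loop that answer describes, walking the depth buffer and emitting an ASCII PLY that MeshLab can open, looks roughly like the following. This is a JavaScript sketch rather than the repository's .NET code, and the frame size and pinhole intrinsics are assumptions (a Kinect depth frame is 640x480, with a focal length of roughly 575 pixels):

    import { writeFileSync } from 'node:fs';

    function depthToPly(depth, width, height, fx = 575, fy = 575) {
      const points = [];
      for (let y = 0; y < height; y++) {
        for (let x = 0; x < width; x++) {
          const z = depth[y * width + x];  // depth in millimeters
          if (z === 0) continue;           // 0 means no reading; skip it
          // Back-project the pixel through a simple pinhole camera model.
          const X = ((x - width / 2) * z) / fx;
          const Y = ((y - height / 2) * z) / fy;
          points.push(`${X} ${Y} ${z}`);
        }
      }
      const header = [
        'ply', 'format ascii 1.0',
        `element vertex ${points.length}`,
        'property float x', 'property float y', 'property float z',
        'end_header',
      ];
      return header.concat(points).join('\n');
    }

    // Usage with a synthetic flat frame, just to show the shape of the API:
    const w = 640, h = 480;
    const frame = new Uint16Array(w * h).fill(1000);  // 1 m everywhere
    writeFileSync('cloud.ply', depthToPly(frame, w, h));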
Autodesk has a piece of software that will do what you are asking for; it is called "Photofly" and is currently in the Labs section. Using a series of images taken from multiple angles, the 3D geometry is created and then photo-mapped with your images to recreate the scene.
If you are more interested in the theoretical part of this problem (I mean, if you want to know how it works),
here is a document from Microsoft Research about a moving depth camera and 3D reconstruction.
Try out VisualSFM (http://ccwu.me/vsfm/) by Changchang Wu (http://ccwu.me/).
It takes multiple images from different angles of the scene and outputs a 3D point cloud.
The algorithm is called "Structure from Motion".
Brief idea of the algorithm: it involves extracting feature points in each image, finding correspondences between them across images, building feature tracks, and estimating the camera matrices and thereby the 3D coordinates of the feature points.
