Conversion of a 2D image to a 3D image

I am going to develop a system that takes a 2D still image as input and produces a 3D image as output.
So the steps are:
1. Creating a depth map from the 2D image
2. Creating a 3D image from the depth map and the original image.
Can anybody suggest algorithms for generating the depth map from a 2D image?

As far as I know, there's no 100% bulletproof algorithm that can convert a 2D image to a 3D model. Simply put, there's not enough information in a 2D image to fully reconstruct something 3D. Some 3D TV sets manage to produce fake 3D from a 2D input, but nothing really convincing (and sometimes plainly wrong).
What well-known software does (like the software behind the Kinect) is use several sources instead of a single 2D image. With pictures from different angles, you can track particular features across the images and, with geometric computations, output something 3D. See http://en.wikipedia.org/wiki/3D_reconstruction_from_multiple_images for a full explanation.
If you're stuck with a single image, the best known tool is the human eye... Humans can easily reconstruct 3D from a picture by unconsciously merging several cues, such as their experience of the scene, focus blur, the "far-away fog" effect, etc. So the best way for you to get a result is to paint the depth map yourself in any image editing software...
Julien
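To illustrate the second step of the question (turning the original image plus a depth map into something 3D), here is a minimal, hypothetical sketch of depth-image-based rendering in Python with NumPy and Pillow: each pixel is shifted horizontally by a disparity proportional to its depth value to produce a left/right stereo pair. The file names, the maximum-disparity value, and the shift convention are assumptions for illustration only.

```python
import numpy as np
from PIL import Image

def render_stereo_pair(image_path, depth_path, max_disparity=12):
    """Naive depth-image-based rendering: shift each pixel horizontally by a
    disparity proportional to its depth value to fake a left/right view pair.
    Disocclusion holes are left black and would need filling in practice."""
    img = np.asarray(Image.open(image_path).convert("RGB"))
    depth = np.asarray(Image.open(depth_path).convert("L"), dtype=np.float32) / 255.0

    h, w, _ = img.shape
    left = np.zeros_like(img)
    right = np.zeros_like(img)

    for y in range(h):
        for x in range(w):
            d = int(round(depth[y, x] * max_disparity))
            if x - d >= 0:
                left[y, x - d] = img[y, x]
            if x + d < w:
                right[y, x + d] = img[y, x]

    return Image.fromarray(left), Image.fromarray(right)

# Usage (file names are placeholders):
# left, right = render_stereo_pair("photo.jpg", "depth.png")
```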

Related

Simplest way to convert 2D symbols to 3D in a video stream

We need to convert a specific 2D video stream to 3D video with some symbology overlaid on it. As an example:
https://www.youtube.com/embed/-YKYjigYgok
edit: I added the video link here because the embedded HTML did not insert correctly.
This is something similar to our project. As you can see, heights are encoded as colors, and some shading and shadows are also visible. The question is: can we convert those mountains and other shapes into 3D in a simple way? I've seen many 2D-to-3D converters on the market, but they are non-deterministic. We want to build our own niche software for this and don't know where to start. We can make use of colors and shadows (for height and light direction), and we also have the altitude of the plane. Once we handle the mountains and other terrain content, adding the 3D symbology is not an issue for us.
What I am looking for is just some direction on how to get this done in the fastest way. Regards.
I think what you're looking for is called a heightmap. You start from a 2D matrix with a height value in every cell and generate a 3D terrain based on that matrix.
The naive way to do it is to assign a vertex to each point in the matrix and then link the vertices together with simple triangles.
As you can imagine, if your map is large this means a lot of triangles. There are techniques that try to compress flat areas or things that are far away so that you spend the triangles on areas where they add more detail. See for example quad-trees. This is also why some renderings seem non-deterministic: the algorithm changes the geometry of far-away things in a way that becomes visible. This can be mitigated by tuning the algorithms to put a larger weight on how visible a change is. A cheap-ish way of doing that is to measure the volume difference between the different levels of detail, but that only works decently when you don't have sharp spikes or pits in your map.
I assume assigning colors to the heights is not a problem here.
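As a rough illustration of the naive approach described above, here is a minimal sketch (Python/NumPy; the function name and the tiny random heightmap are my own) that turns a heightmap matrix into a vertex list and a triangle index list, two triangles per grid cell:

```python
import numpy as np

def heightmap_to_mesh(heights):
    """Naive triangulation: one vertex per heightmap cell,
    two triangles per grid square. Returns (vertices, triangles)."""
    rows, cols = heights.shape
    # One vertex per cell: (x, y, z) with z taken from the heightmap.
    vertices = np.array([(x, y, heights[y, x])
                         for y in range(rows)
                         for x in range(cols)], dtype=np.float32)

    triangles = []
    for y in range(rows - 1):
        for x in range(cols - 1):
            i = y * cols + x          # index of the cell's top-left vertex
            # Split each grid square into two triangles.
            triangles.append((i, i + 1, i + cols))
            triangles.append((i + 1, i + cols + 1, i + cols))
    return vertices, np.array(triangles, dtype=np.int32)

# Example: a tiny 4x4 heightmap (in practice, heights decoded from the color-indexed frame).
heights = np.random.rand(4, 4).astype(np.float32)
verts, tris = heightmap_to_mesh(heights)
print(verts.shape, tris.shape)  # (16, 3) (18, 3)
```

The level-of-detail schemes mentioned above (quad-trees and friends) then replace this uniform grid with coarser triangles in flat or distant regions.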

fast rasterisation and colorization of 2D polygons of known shape to an image file

The shapes and positions of all the polygons are known beforehand. The polygons do not overlap, will be of different colors and shapes, and there could be quite a lot of them. The polygons are defined in floating-point coordinates and will be painted on top of a JPEG photo as an annotation.
How could I create the resulting image file as fast as possible after I get to know which color I should give each polygon?
If it would save time, I would like to perform as much of the computation as possible beforehand. All information regarding the geometry and positions of the polygons is known in advance. The JPEG photo is also known in advance. The only information not known beforehand is the color of each polygon.
The JPEG photo has a size of 250x250 pixels, so that would also be the image size of the resulting rasterised image.
The computations will be done on a Linux computer with a standard graphics card, so OpenGL might be a viable option. I know there are also rasterisation libraries like Cairo that could be used to paint polygons. What I wonder is if I could take advantage of the fact that I know so much of the input in advance and use that to speed up the computation. The only thing missing is the color of each polygon.
Preferably I would like to find a solution that only precomputes things in the form of data files. In other words, as soon as the polygon colors are known, the algorithm would load the other information from data files (the JPEG file, a polygon geometry file, and/or possibly precomputed data files). Of course it would be faster to start the computation with a "warm" state already in the GPU/CPU/RAM, but I'd like to avoid that. The choice of programming language is not so important, but it could for instance be C++.
To give some more background information: the JavaScript library OpenSeadragon running in a web browser requests image tiles from a web server. The idea is that measurement points (i.e. the polygons) could be plotted on the fly onto pregenerated Deep Zoom Images (DZI format) by the web server. So for one image tile the algorithm would only need to be run once. The aim is low latency.
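As a minimal CPU-side sketch of the idea (not GPU-accelerated, and the file name, polygon coordinates, and colors are placeholders), Pillow's ImageDraw can rasterise the pre-known polygons onto the pre-loaded JPEG once the colors arrive; only the fill step runs per request:

```python
from PIL import Image, ImageDraw

# Known ahead of time: the background photo and the polygon geometry.
# Coordinates are in pixels; in practice they would be loaded from a geometry file.
background = Image.open("tile.jpg").convert("RGB")   # 250x250 photo (placeholder name)
polygons = [
    [(10.5, 20.0), (60.2, 25.0), (40.0, 80.7)],
    [(120.0, 30.0), (180.0, 40.0), (170.0, 110.0), (110.0, 100.0)],
]

def rasterise(colors):
    """Paint each polygon with its late-arriving color onto a copy of the photo."""
    img = background.copy()
    draw = ImageDraw.Draw(img)
    for points, color in zip(polygons, colors):
        draw.polygon(points, fill=color)
    return img

# Only the colors are unknown until request time:
tile = rasterise([(255, 0, 0), (0, 128, 255)])
tile.save("tile_annotated.png")
```

For higher throughput, the same structure maps onto OpenGL or Cairo: upload the geometry once, then update only the per-polygon colors per request.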

Creating a 3D map with 2D depth images in PROCESSING

I'm creating a 3D map with 2D depth images in Processing. I have captured the images using saveFrame(); however, I am having difficulty converting those saved frames into 3D. Is there any website or code I could look through for help? Any help would be much appreciated.
Before I go in-depth into your question, I want to mention that instead of saveFrame() you can use the standard DXF library to export 3D models instead of 2D images from Processing, if you simply want to store a scene:
https://processing.org/reference/libraries/dxf/
Now back to your question. First of all, what are depth images? Are those simply saveFrame() outputs from a 3D scene in Processing (P3D), or are they special images? "Depth" is quite a general term. If they are 3D scenes and you know the coordinates of the camera and its view angle, the task gets quite a bit easier, but it is technically impossible to create a complete 3D object using only 2D images without X-ray-like data. Imagine looking at a fork. Your eyes make two pictures of that fork, but you have no idea what might be inscribed on the back of it. No matter how many pictures you have of your 3D scene, you won't be able to convert it into 3D perfectly. That said, this is indeed a general problem in computer science, and there are various methods to approach it. Wikipedia has articles on it:
http://en.wikipedia.org/wiki/3D_reconstruction_from_multiple_images
http://en.wikipedia.org/wiki/2D_to_3D_conversion
Here are a few stackoverflow topics which might help you get started:
3d model construction using multiple images from multiple points (kinect)
How to create 3D model from 2D image
Converting several 2D images into 3D model
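If the "depth images" are simply grayscale frames where brightness encodes distance, a minimal sketch of the conversion looks like this (in Python with Pillow and NumPy rather than Processing, but the same loop translates directly; the scale factor, subsampling step, and file name are assumptions): every sampled pixel becomes one 3D point, with its gray value used as the z coordinate.

```python
import numpy as np
from PIL import Image

def depth_image_to_points(path, z_scale=0.5, step=4):
    """Turn a grayscale depth frame into an (N, 3) array of 3D points.
    'step' subsamples the image so the point cloud stays manageable."""
    depth = np.asarray(Image.open(path).convert("L"), dtype=np.float32)
    points = []
    for y in range(0, depth.shape[0], step):
        for x in range(0, depth.shape[1], step):
            z = depth[y, x] * z_scale   # brightness -> depth (arbitrary scale)
            points.append((x, y, z))
    return np.array(points, dtype=np.float32)

# pts = depth_image_to_points("frame-0001.png")
# print(pts.shape)
```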

3d model construction using multiple images from multiple points (kinect)

Is it possible to construct a 3D model of a still object if various images, along with depth data, are gathered from various angles? What I was thinking was to have a sort of circular conveyor belt on which a Kinect would be placed, while the real object to be reconstructed in 3D space sits in the middle. The conveyor belt then rotates around the object in a circle and lots of images are captured (perhaps 10 images per second), which would allow the Kinect to capture an image from every angle, including the depth data. Theoretically this should be possible. The model would also have to be recreated with its textures.
What I would like to know is whether there are any similar projects/software already available; any links would be appreciated.
Whether this is possible within perhaps 6 months
How would I proceed with this? For example, are there any similar algorithms you could point me to, and so on?
Thanks,
MilindaD
It is definitely possible; there are a lot of working 3D scanners out there based on more or less the same principle of stereoscopy.
You probably know this, but just to contextualize: the idea is to observe the same scene point in two images taken from different positions and to use triangulation to compute its 3D coordinates. Although this part is quite easy, the big issue is finding the correspondences between the points in your two images, and this is where you need good software to extract and match similar points.
There is an open-source project called MeshLab for 3D vision, which includes 3D reconstruction* algorithms. I don't know the details of the algorithms, but the software is definitely a good entry point if you want to play with 3D.
I used to know some other ones; I will try to find them and add them here:
Insight3d
(*Wiki page has no content, redirects to login for editing)
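To make the triangulation step concrete, here is a minimal sketch using OpenCV's cv2.triangulatePoints. The projection matrices and the matched pixel coordinates are made-up placeholders; in practice they come from camera calibration and from the correspondence search described above.

```python
import numpy as np
import cv2

# 3x4 projection matrices for the two views (placeholder values:
# one camera at the origin, a second camera translated along x).
P1 = np.hstack([np.eye(3), np.zeros((3, 1))]).astype(np.float64)
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])]).astype(np.float64)

# Matched pixel coordinates of the same scene points in both images,
# as 2xN arrays (these would come from feature matching).
pts1 = np.array([[100.0, 120.0],    # x coordinates
                 [150.0, 160.0]])   # y coordinates
pts2 = np.array([[ 90.0, 110.0],
                 [150.0, 160.0]])

# Triangulate: returns 4xN homogeneous coordinates.
points_h = cv2.triangulatePoints(P1, P2, pts1, pts2)
points_3d = (points_h[:3] / points_h[3]).T   # Nx3 Euclidean points
print(points_3d)
```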
Check out https://bitbucket.org/tobin/kinect-point-cloud-demo/overview, which is a code sample for the Kinect for Windows SDK that does exactly this. Currently it uses the bitmaps captured by the depth sensor and iterates through the byte array to create a point cloud in the PLY format, which can be read by MeshLab. The next stage for us is to apply/refine a Delaunay triangulation algorithm to form a mesh instead of points, to which a texture can be applied. A third stage would then be a mesh-merging step to combine multiple captures from the Kinect into a full 3D object mesh.
This is based on some work I did in June using the Kinect for the purposes of 3D printing capture.
The .NET code in this source code repository should, however, get you started with what you want to achieve.
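As an illustration of the "depth array to PLY point cloud" step described above (not the code from the linked repository; a minimal Python sketch with a made-up depth array standing in for the Kinect's depth bitmap), MeshLab can open the resulting file directly:

```python
import numpy as np

def write_ply(points, path):
    """Write an Nx3 list of points as an ASCII PLY file that MeshLab can open."""
    with open(path, "w") as f:
        f.write("ply\nformat ascii 1.0\n")
        f.write(f"element vertex {len(points)}\n")
        f.write("property float x\nproperty float y\nproperty float z\n")
        f.write("end_header\n")
        for x, y, z in points:
            f.write(f"{x} {y} {z}\n")

# Fake depth frame (row = y, column = x), depth in millimetres.
depth = np.random.randint(500, 4000, size=(48, 64))
points = [(x, y, depth[y, x]) for y in range(depth.shape[0])
                              for x in range(depth.shape[1])]
write_ply(points, "cloud.ply")
```

Meshing (Delaunay triangulation) and merging multiple captures would then build on point clouds like this one.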
Autodesk has a piece of software that will do what you are asking for; it is called "Photofly". It is currently in the Labs section. Using a series of images taken from multiple angles, the 3D geometry is created and then photo-mapped with your images to create the scene.
If you are more interested in the theoretical part of this problem (I mean, if you want to know how it works), here is a document from Microsoft Research about a moving depth camera and 3D reconstruction.
Try out VisualSfM (http://ccwu.me/vsfm/) by Changchang Wu (http://ccwu.me/)
It takes multiple images from different angles of the scene and outputs a 3D point cloud.
The algorithm is called "Structure from Motion".
Brief idea of the algorithm: it involves extracting feature points in each image, finding correspondences between them across images, building feature tracks, and estimating camera matrices and thereby the 3D coordinates of the feature points.
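A minimal sketch of the first two stages of that pipeline (feature extraction and correspondence search), using OpenCV's ORB detector and a brute-force matcher; the image file names are placeholders, and a full structure-from-motion system such as VisualSfM does far more on top of this:

```python
import cv2

# Two views of the same scene from different angles (placeholder file names).
img1 = cv2.imread("view1.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("view2.jpg", cv2.IMREAD_GRAYSCALE)

# 1. Extract feature points and descriptors in each image.
orb = cv2.ORB_create(nfeatures=2000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# 2. Find correspondences between the two images.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

print(f"{len(matches)} tentative correspondences")
# The remaining SfM stages (feature tracks, camera matrix estimation,
# triangulation of 3D points) build on these matched features.
```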

Image recognition and 3d rendering

How hard would it be to take an image of an object (in this case a predefined object) and develop an algorithm to cut just that object out of a photo with a background of varying complexity?
Further to this, the photo's object (say a house, a car, or a dog, but always of one known type) would need to be transformed into a 3D render. I know there are 3D rendering engines available (at a cost, free, or with some licence clause), but for this to work the object (subject) would need to be measured in all sorts of ways, e.g. if it is a person, we would need to measure the height, the curvature of the shoulders, the radius of the face, the length of each finger, etc.
How feasible would solving this problem be? Does anyone know any good links specialising in this research area? I've seen open-source solutions to this problem, which leaves me with the question of how easy it is to measure the object while tracing around it to crop it out.
Thanks
Essentially I want to take a 2D image (a typical image, which is easier than a complex photo containing multiple objects, etc.), but effectively I want to turn that into a 3D image, so wouldn't what I want to do involve building a 3D rendering/modelling engine?
Furthermore, the link I have provided goes into 3ds Max, where a few properties are set and a render is made.
It sounds like you want to do several things, all in the domain of computer vision.
Object Recognition (i.e. find the predefined object)
3D Reconstruction (make the 3d model from the image)
Image Segmentation (cut out just the object you are worried about from the background)
I've ranked them in order of easiest to hardest (according to my limited understanding). Altogether, I would say it is a very complicated problem. I would look at the following Wikipedia links for more information:
Computer Vision Overview (Wikipedia)
The Eight Point Algorithm (for 3d reconstruction)
Image Segmentation
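For the segmentation part (cutting the object out of a photo with a varied background), here is a minimal sketch using OpenCV's GrabCut; the file name and the rough bounding rectangle around the object are assumptions (in practice the rectangle might be user-drawn or come from the object detector):

```python
import numpy as np
import cv2

img = cv2.imread("photo.jpg")               # photo containing the object (placeholder name)
mask = np.zeros(img.shape[:2], np.uint8)

# Rough rectangle around the object: (x, y, width, height).
rect = (50, 50, 300, 400)

bgd_model = np.zeros((1, 65), np.float64)    # internal models required by GrabCut
fgd_model = np.zeros((1, 65), np.float64)
cv2.grabCut(img, mask, rect, bgd_model, fgd_model, 5, cv2.GC_INIT_WITH_RECT)

# Pixels marked as definite or probable foreground form the cut-out object.
fg_mask = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 1, 0).astype("uint8")
cutout = img * fg_mask[:, :, np.newaxis]
cv2.imwrite("cutout.png", cutout)
```

The 3D reconstruction step is the hard part that remains after recognition and segmentation.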
You're right, this is an extremely hard set of problems, particularly that of inferring 3D information from a 2D image. Only a very limited understanding exists of how our visual system extrapolates 3D information from 2D images; one such approach is known as "Shape from Shading", and the linked Google search shows how much (and consequently how little) we know.
Rob
This is a very difficult task. The hardest part is not recognising or segmenting the object from the image, but rather inferring the 3-D geometry of the object from the 2-D image. You will have more success if you can use a stereoscopic camera (or a laser scanner, if you have access to one ;).
For the case of 2-D images, try googling for "shape-from-shading". This is a method for inferring 3-D shape from a 2-D image. It does make assumptions about illumination conditions and surface properties (BRDF and geometry) that may fail in many cases, but if you are using it for only a predefined class of objects (e.g. human faces) it can work reasonably well.
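To illustrate the assumption that shape-from-shading tries to invert, here is a toy forward model (a sketch with a made-up height map, not a reconstruction method): under a Lambertian surface and a single distant light, image brightness is the dot product of the surface normal and the light direction, and shape-from-shading methods attempt to recover the surface from that brightness.

```python
import numpy as np

# Toy height map: a smooth bump standing in for an unknown surface.
n = 64
ys, xs = np.mgrid[0:n, 0:n]
z = np.exp(-((xs - n / 2) ** 2 + (ys - n / 2) ** 2) / (2 * 12.0 ** 2))

# Surface normals from the height gradients.
dzdy, dzdx = np.gradient(z)
normals = np.dstack([-dzdx, -dzdy, np.ones_like(z)])
normals /= np.linalg.norm(normals, axis=2, keepdims=True)

# Lambertian shading: brightness = max(0, normal . light_direction).
light = np.array([0.3, 0.3, 0.9])
light = light / np.linalg.norm(light)
image = np.clip(normals @ light, 0.0, 1.0)

# Shape-from-shading is the inverse problem: recover z from 'image',
# given assumptions about the light and the surface reflectance (BRDF).
print(image.shape, image.min(), image.max())
```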
Assuming it's possible, that would be extremely difficult, especially with only one image of the object. The rasterizer has to guess at the depth and distances of objects.
What you describe sounds very similar to Microsoft PhotoSynth.
