Augmented reality like Zookazam - algorithm

What algorithms are used for augmented reality apps like Zookazam?
I think they analyze the image and find planes by contrast, but I don't know how.
What topics should I read about before starting on an app like this?

[Prologue]
This is an extremely broad topic and mostly off topic in its current state. I re-edited your question to make it answerable within the rules/possibilities of this site.
You should specify more closely what your augmented reality app:
should do
adding 2D/3D objects with known mesh ...
changing light conditions
adding/removing body parts/clothes/hairs ...
It is a good idea to provide an example image (sketch) of the input/output you want to achieve.
what input it has
video, static image, 2D, stereo, 3D. For pure 2D input, specify what conditions/markers/illumination/laser patterns you have to help the reconstruction.
what will be in the input image? An empty room, persons, specific objects, etc.
specify target platform
Many algorithms are limited by memory size/bandwidth, CPU power, special HW capabilities, etc., so it is a good idea to add a tag for your platform. Adding the OS and language is also a good idea.
[How augmented reality works]
acquire input image
If you are connecting to a device like a camera, you need to use its driver/framework (or some common API it supports) to obtain the image. This task is OS dependent. My favorite way on Windows is to use the VFW (Video for Windows) API.
I would start with some static file(s) instead, to ease the debugging and incremental building process (you do not need to wait for the camera on each build). When your app is ready for live video, switch back to the camera...
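As a rough, platform-neutral illustration (using OpenCV's Python bindings rather than VFW; the camera index and file name are assumptions), acquisition with a file/camera switch could look like this:

    import cv2

    USE_CAMERA = False            # start with a static file, flip to True for live video
    TEST_IMAGE = "test_room.png"  # hypothetical test image

    if USE_CAMERA:
        cap = cv2.VideoCapture(0)          # first attached camera
        ok, frame = cap.read()
        cap.release()
    else:
        frame = cv2.imread(TEST_IMAGE)
        ok = frame is not None

    if ok:
        print("acquired frame of shape", frame.shape)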
reconstruct the scene into 3D mesh
If you use a 3D camera like Kinect, this step is not necessary. Otherwise you need to distinguish the objects by some segmentation process, usually based on edge detection or color homogeneity.
The quality of the 3D mesh depends on what you want to achieve and on your input. For example, if you want realistic shadows and lighting then you need a very good mesh. If the camera is fixed in some room you can predefine the mesh manually (hard code it) and compute just the objects in view. Object detection/segmentation can also be done very simply by subtracting the empty-room image from the current view image, so the pixels with a big difference are the objects (see the sketch below).
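A minimal sketch of that empty-room subtraction, assuming OpenCV and using made-up file names and an arbitrary threshold:

    import cv2
    import numpy as np

    empty_room = cv2.imread("empty_room.png", cv2.IMREAD_GRAYSCALE)    # reference shot
    current    = cv2.imread("current_view.png", cv2.IMREAD_GRAYSCALE)  # frame with objects

    diff = cv2.absdiff(current, empty_room)                    # per-pixel difference
    _, mask = cv2.threshold(diff, 30, 255, cv2.THRESH_BINARY)  # 30 is an arbitrary threshold
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))  # suppress noise

    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    print("detected", len(contours), "object blobs")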
You can also use planes instead of a real 3D mesh, as you suggested in the OP, but then you can forget about more realistic effects like lighting, shadows, intersections... If you assume the objects are standing upright, you can use room metrics to obtain the distance from the camera. See:
selection criteria for different projections
estimate measure of photographed things
For pure 2D input you can also use the illumination to estimate the 3D mesh. See:
Turn any 2D image into 3D printable sculpture with code
render
Just render the scene back to some image/video/screen with the added/removed features. If you are not changing the light conditions too much, you can also use the original image and render directly onto it. Shadows can be achieved by darkening the pixels (a toy sketch follows after the links below). For better results, the illumination/shadows/spots/etc. are usually filtered out from the original image and then added back by rendering instead. See:
White balance (Color Suppression) Formula?
Enhancing dynamic range and normalizing illumination
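Here is the toy shadow-darkening sketch mentioned above; the mask shape and darkening factor are invented, and OpenCV/NumPy are used only for convenience:

    import cv2
    import numpy as np

    image = cv2.imread("scene.png")                      # original camera frame
    shadow_mask = np.zeros(image.shape[:2], np.uint8)
    cv2.ellipse(shadow_mask, (200, 300), (80, 30), 0, 0, 360, 255, -1)  # invented shadow footprint

    shaded = image.copy()
    region = shadow_mask > 0
    shaded[region] = (shaded[region] * 0.5).astype(np.uint8)   # darken the masked pixels by 50%
    cv2.imwrite("scene_with_shadow.png", shaded)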
The rendering process itself is also platform dependent (unless you are doing it with low-level graphics in memory). You can use things like GDI, DX, OpenGL, ... See:
Graphics rendering
You also need camera parameters for rendering, for example:
Transformation of 3D objects related to vanishing points and horizon line
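To give a feel for what such camera parameters do, a minimal pinhole-projection sketch (the focal length and principal point are assumed values, not measured ones):

    import numpy as np

    fx = fy = 800.0             # assumed focal length in pixels
    cx, cy = 320.0, 240.0       # assumed principal point (image centre)
    K = np.array([[fx, 0.0, cx],
                  [0.0, fy, cy],
                  [0.0, 0.0, 1.0]])

    point_cam = np.array([0.2, -0.1, 2.0])   # a 3D point in camera coordinates (metres)
    u, v, w = K @ point_cam                  # homogeneous image coordinates
    print("projects to pixel", (u / w, v / w))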
[Basic topics to google/read]
2D
DIP digital image processing
Image Segmentation
3D
Vector math
Homogeneous coordinates
3D scene reconstruction
3D graphics
normal shading
platform dependent
image acquisition
rendering

Related

fast rasterisation and colorization of 2D polygons of known shape to an image file

The shape and positions of all the polygons are known beforehand. The polygons are not overlapping and will be of different colors and shapes, and there could be quite many of them. The polygons are defined in floating point based coordinates and will be painted on top of a JPEG photo as annotation.
How could I create the resulting image file as fast as possible after I get to know which color I should give each polygon?
If it would save time I would like to perform as much as possible of the computations beforehand. All information regarding geometry and positions of the polygons are known in advance. The JPEG photo is also known in advance. The only information not known beforehand is the color of each polygon.
The JPEG photo has a size of 250x250 pixels, so that would also be the image size of the resulting rasterised image.
The computations will be done on a Linux computer with a standard graphics card, so OpenGL might be a viable option. I know there are also rasterisation libraries like Cairo that could be used to paint polygons. What I wonder is if I could take advantage of the fact that I know so much of the input in advance and use that to speed up the computation. The only thing missing is the color of each polygon.
Preferably I would like to find a solution that would only precompute things in the form of data files. In other words, as soon as the polygon colors are known, the algorithm would load the other information from data files (JPEG file, polygon geometry file and/or possibly precomputed data files). Of course it would be faster to start the computation with a "warm" state ready in the GPU/CPU/RAM, but I'd like to avoid that. The choice of programming language is not so important, but could for instance be C++.
To give some more background information: The JavaScript library OpenSeadragon that is running in a web browser requests image tiles from a web server. The idea is that measurement points (i.e. the polygons) could be plotted on-the-fly on to pregenerated Zooming Images (DZI format) by the web server. So for one image tile the algorithm would only need to be run one time. The aim is low latency.
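One hedged way to exploit the "only the colours arrive late" observation is to precompute a per-pixel polygon-index map offline and, at request time, just look up colours and composite over the photo. The sketch below uses Python with Pillow/NumPy purely for illustration (file names and geometry are made up); the same idea ports directly to C++ or to a texture lookup on the GPU.

    import numpy as np
    from PIL import Image, ImageDraw

    SIZE = (250, 250)
    polygons = [
        [(10.5, 20.0), (60.0, 25.0), (40.0, 80.0)],           # hypothetical geometry
        [(100.0, 100.0), (180.0, 120.0), (150.0, 200.0)],
    ]

    # --- offline: rasterise each polygon once into an index map and save it ---
    index_img = Image.new("I", SIZE, -1)                      # -1 means "no polygon here"
    draw = ImageDraw.Draw(index_img)
    for i, poly in enumerate(polygons):
        draw.polygon(poly, fill=i)
    index_map = np.array(index_img)                           # e.g. np.save("index_map.npy", index_map)

    # --- online: only the colours are new; composite over the known photo ---
    photo = np.array(Image.open("tile.jpg").convert("RGB"))   # the known 250x250 JPEG
    colors = np.array([[255, 0, 0], [0, 128, 255]], np.uint8) # one RGB colour per polygon
    covered = index_map >= 0
    photo[covered] = colors[index_map[covered]]
    Image.fromarray(photo).save("annotated_tile.jpg")

With the index map cached as a data file, the per-request work is a table lookup plus one JPEG encode, which should keep latency low.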

Where to find information on 3D algorithms?

I am interested in learning about 3D video game development, but am not sure where to start really.
Instead of just making it which could be done by various game makers, I am more interested in how it is done.
Ideally, I would like to know in which format general 3D models are stored (coordinate format, etc.), and how to represent the 3D data on the screen from a certain perspective, as in free-roaming 3D video games like Devil May Cry.
I have seen some links regarding 3D matrices but I really don't understand how they are used. Any help for beginners would be much appreciated.
Thanks
Video game development is a huge field requiring knowledge in game theory, computer science, math, physics and art. Depending on what you want to specialize in, there are different starting points. But as this is a site for programming questions, here are some insights on the programming part of it:
File formats
Assets (models, textures, sounds) are created using dedicated 3rd party tools (think of Gimp, Photoshop, Blender, 3ds Max, etc), which offer a wide range of different export formats. These formats usually have one thing in common: They are optimized for simple communication between applications.
Video games have high performance requirements and assets have to be loaded and unloaded all the time during gameplay. So the content has to be in a format that is compact and loads fast. Often 3rd party formats do not meet the specific requirements you have in your game project. For optimal performance you would want to consider developing your own format.
Examples of assets and common 3rd party formats:
Textures: PNG, JPG, BMP, TGA
3D models: OBJ, 3DS, COLLADA
Sounds: WAV, MP3
Additional examples
Textures in Direct3D
In my game project I use an importer that converts my textures from one of the aforementioned formats to DDS files. This is not a format I developed myself; still, it is one of the fastest available for loading with Direct3D (a graphics API).
Static 3D models
The Wavefront OBJ file format is a very simple to understand, text-based format. Most 3D modelling applications support it. But since it is text based, the files are much larger than equivalent binary files, and they require lots of expensive parsing and processing. So I developed an importer that converts OBJ models to my custom high-performance binary format (see the toy converter sketch below).
Wave sound files
WAV is a very common sound file format. Additionally, it is quite well suited for use in a game, so no custom format is necessary in this case.
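To make the OBJ-to-binary idea above concrete, here is a toy converter sketch (not the answerer's actual importer; it keeps only vertex positions and triangular faces and packs them into a flat binary file):

    import struct

    def obj_to_binary(obj_path, bin_path):
        """Toy converter: keep only 'v' and triangular 'f' records from an OBJ file."""
        positions, triangles = [], []
        with open(obj_path) as f:
            for line in f:
                parts = line.split()
                if not parts:
                    continue
                if parts[0] == "v":                        # vertex position: v x y z
                    positions.append(tuple(float(x) for x in parts[1:4]))
                elif parts[0] == "f" and len(parts) == 4:  # triangle face: f a b c
                    # OBJ indices are 1-based and may look like "3/1/2"; keep the position index
                    triangles.append(tuple(int(p.split("/")[0]) - 1 for p in parts[1:4]))

        with open(bin_path, "wb") as out:
            out.write(struct.pack("<II", len(positions), len(triangles)))  # small header
            for p in positions:
                out.write(struct.pack("<3f", *p))
            for t in triangles:
                out.write(struct.pack("<3I", *t))

    # obj_to_binary("model.obj", "model.bin")   # hypothetical file names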
3D graphics
Rendering a 3D scene at least 30 times per second at an average screen resolution requires quite a lot of calculations. For this purpose GPUs were built. While it is possible to write any kind of program for the GPU using very low-level languages, most developers use an abstraction like Direct3D or OpenGL. These APIs, while restricting the way of communicating with the GPU, greatly simplify graphics-related tasks.
Rendering using an API
I have only worked with Direct3D so far, but some of this should apply to OpenGL as well.
As I said, the GPU can be programmed. Direct3D and OpenGL both come with their own GPU programming language, a.k.a. shading language: HLSL (Direct3D) and GLSL (OpenGL). A program written in one of those languages is called a shader.
Before rendering a 3D model the graphics device has to be prepared for rendering. This is done by binding the shaders and other effect states to the device. (All of this is done using the API.)
A 3D model is usually represented as a set of vertices. For example, 4 vertices for a rectangle, 8 for a cube, etc. These vertices consist of multiple components. The absolute minimum in this case would be a position component (3 floating point numbers representing the X, Y and Z offsets in 3D space). Also, a position is just an infinitely small point, so additionally we need to define how the points are connected into a surface (usually as triangles).
When the vertices and triangles are defined, they can be written to the memory of the GPU. If everything is correctly set, we can issue a Draw Call through the API. The GPU then executes your shaders and processes all the input data. In the last step the rendered triangles are written to the defined output (the screen, for example).
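As a language-neutral illustration of what that vertex and index data looks like before it is uploaded to the GPU (the exact layout is an assumption; real vertex formats add normals, texture coordinates, etc.), here is the rectangle example expressed as two arrays:

    import numpy as np

    # Four vertices of a unit rectangle in the XY plane: x, y, z per vertex.
    vertices = np.array([
        [0.0, 0.0, 0.0],   # bottom-left
        [1.0, 0.0, 0.0],   # bottom-right
        [1.0, 1.0, 0.0],   # top-right
        [0.0, 1.0, 0.0],   # top-left
    ], dtype=np.float32)

    # Two triangles connect the four points into a surface (indices into `vertices`).
    indices = np.array([
        [0, 1, 2],
        [0, 2, 3],
    ], dtype=np.uint16)

    # Buffers like these are what a Direct3D/OpenGL draw call ultimately consumes.
    print(vertices.nbytes, "bytes of vertex data,", indices.nbytes, "bytes of index data")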
Matrices in 3D graphics
As I said before, a 3D mesh consists of vertices with a position in 3D space. These positions are all embedded in a coordinate system called object space.
In order to place the object in the world, move, rotate or scale it, these positions have to be transformed. In other words, they have to be embedded in another coordinate system, which in this case would be called world space.
The simplest and most efficient way to do this transformation is matrix multiplication: From the translation, rotation and scaling amounts a 4x4 matrix is constructed. This matrix is then multiplied with each and every vertex. (The math behind it is quite interesting, but not in the scope of this question.)
Besides object and world space there is also view space (the coordinate system of the 'camera'), clip space, screen space and tangent space (on the surface of an object). Vectors have to be transformed between those coordinate systems quite a lot, so you see why matrices are so important in 3D graphics.
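A tiny numerical sketch of the object-space to world-space step described above, building a 4x4 translate-rotate-scale matrix and applying it to one homogeneous vertex (the amounts are arbitrary):

    import numpy as np

    def translation(tx, ty, tz):
        m = np.eye(4); m[:3, 3] = [tx, ty, tz]; return m

    def rotation_z(angle_rad):
        c, s = np.cos(angle_rad), np.sin(angle_rad)
        m = np.eye(4); m[:2, :2] = [[c, -s], [s, c]]; return m

    def scale(s):
        m = np.eye(4); m[0, 0] = m[1, 1] = m[2, 2] = s; return m

    # world = T * R * S  (applied to column vectors, so scaling happens first)
    world = translation(2.0, 0.0, -5.0) @ rotation_z(np.pi / 4) @ scale(2.0)

    vertex_object_space = np.array([1.0, 0.0, 0.0, 1.0])   # homogeneous coordinates (w = 1)
    vertex_world_space = world @ vertex_object_space
    print(vertex_world_space[:3])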
How to continue from here
Find a topic that you think is interesting and start googling. I think I gave you quite a few keywords and I hope I gave you some idea of the topics you mentioned specifically.
There is also a Game Development site in the Stack Exchange network which might be better suited for this kind of question. The top-voted questions are always a good read on any SE site.
Basically the first decision is whether to use OpenGL or DirectX.
I suggest you use OpenGL because it is platform independent and can also be used for mobile devices.
For OpenGL here are some good tutorials to get you started:
http://www.opengl-tutorial.org/

Transform a set of 2d images representing all dimensions of an object into a 3d model

Given a set of 2D images that cover all dimensions of an object (e.g. a car and its roof/sides/front/rear), how could I transform this into a 3D object?
Are there any libraries that could do this?
Thanks
These "2D images" are usually called "textures". You probably want a 3D library which allows you to specify a 3D model with bitmap textures. The library would depend on platform you are using, but start with looking at OpenGL!
OpenGL for PHP
OpenGL for Java
... etc.
I've heard of the program "Poser" doing this using heuristics for human forms, but otherwise I don't believe this is actually theoretically possible. You are asking to construct volumetric data from flat data (inferring the third dimension).
I think you'd have to make a ton of assumptions about your geometry, and even then, you'd only really have a shell of the object. If you did this well, you'd have a contiguous surface representing the boundary of the object - not a volumetric object itself.
What you can do, as Tomas suggested, is slap these 2D images onto something. However, you will still need to construct a triangle mesh surface, and actually do all the modeling, for this to present a 3D surface.
I hope this helps.
What currently exists that can do anything close to what you are asking for automagically is extremely proprietary. There are no libraries, but there are some products.
The core issue is matching corresponding points in the images and being able to say: this spot in image A is this spot in image B, and they both match this spot in image C, etc.
There are three ways to go about this, manually matching (you have the photos and have to use your own brain to find the corresponding points), coded targets, and texture matching.
PhotoModeller, www.photomodeller.com, $1,145.00US, supports manual matching and coded targets. You print out a bunch of images, attach them to your object, shoot your photos, and the software finds the targets in each picture and creates a 3D object based on those points.
PhotoModeller Scanner, $2,595.00US, adds texture matching. Tiny bits of the images are compared to see if they represent the same source area.
Both PhotoModeller products depend on shooting the images with a calibrated camera, where you use a consistent focal length for every shot and go through a calibration process to map the lens distortion of the camera.
If you can do manual matching, the Match Photo feature of Google SketchUp may do the job, and SketchUp is free. If you can shoot new photos, you can add your own targets like colored sticker dots to the object to help you generate contours.
If your images are drawings, like profile, plan view, etc. PhotoModeller will not help you, but SketchUp may be just the tool you need. You will have to build up each part manually because you will have to supply the intelligence to recognize which lines and points correspond from drawing to drawing.
I hope this helps.

3d model construction using multiple images from multiple points (kinect)

Is it possible to construct a 3D model of a still object if various images along with depth data were gathered from various angles? What I was thinking was to have a sort of circular conveyor belt on which a Kinect would be placed, while the real object to be reconstructed in 3D space sits in the middle. The conveyor belt then rotates around the object in a circle and lots of images are captured (perhaps 10 images per second), which would allow the Kinect to capture an image from every angle, including the depth data. Theoretically this should be possible. The model would also have to be recreated with the textures.
What I would like to know is whether there are any similar projects/software already available; any links would be appreciated.
Whether this is possible within perhaps 6 months.
How I would proceed to do this, e.g. any similar algorithm you could point me to.
Thanks,
MilindaD
It is definitely possible; there are a lot of working 3D scanners out there, based on more or less the same principle of stereoscopy.
You probably know this, but just to contextualize: the idea is to get two images of the same scene point from different viewpoints and to use triangulation to compute the 3D coordinates of that point in your scene. Although this part is quite easy, the big issue is finding the correspondence between the points in your two images, and this is where you need good software to extract and recognize similar points.
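Once corresponding points are known, the triangulation step itself is small. A hedged sketch with OpenCV, assuming the two 3x4 projection matrices are already known (e.g. from calibration); the numbers are toy values:

    import numpy as np
    import cv2

    # 3x4 projection matrices of the two views (assumed known); normalised
    # coordinates, with the second camera shifted 10 cm along the X axis.
    P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = np.hstack([np.eye(3), np.array([[-0.1], [0.0], [0.0]])])

    # One matched point seen in both images (normalised image coordinates).
    pts1 = np.array([[0.0], [0.0]])     # where the point appears in view 1
    pts2 = np.array([[-0.05], [0.0]])   # where the same point appears in view 2

    X_h = cv2.triangulatePoints(P1, P2, pts1, pts2)   # homogeneous 4x1 result
    X = (X_h[:3] / X_h[3]).ravel()
    print("reconstructed 3D point:", X)               # ~ (0, 0, 2) in this toy setup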
There is an open-source project called Meshlab for 3D vision, which includes 3D reconstruction algorithms. I don't know the details of the algorithms, but the software is definitely a good entry point if you want to play with 3D.
I used to know some other ones, I will try to find them and add them here:
Insight3d
Check out https://bitbucket.org/tobin/kinect-point-cloud-demo/overview which is a code sample for the Kinect for Windows SDK that does specifically this. Currently it uses the bitmaps captured by the depth sensor and iterates through the byte array to create a point cloud in PLY format that can be read by MeshLab. The next stage for us is to apply/refine a Delaunay triangulation algorithm to form a mesh instead of points, to which a texture can be applied. A third stage would then be a mesh-merging step to combine multiple captures from the Kinect into a full 3D object mesh.
This is based on some work I did in June using the Kinect for the purposes of 3D printing capture.
The .NET code in this source code repository will however get you started with what you want to achieve.
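The repository above is .NET; purely as an illustration of the depth-image-to-PLY idea it describes, here is a small Python sketch that back-projects a depth array through assumed Kinect-like intrinsics and writes an ASCII PLY that MeshLab can open:

    import numpy as np

    def depth_to_ply(depth_mm, ply_path, fx=580.0, fy=580.0, cx=320.0, cy=240.0):
        """Back-project a HxW depth image (millimetres) into a point cloud and write ASCII PLY.
        The intrinsic defaults are rough Kinect-v1-like values, not calibrated ones."""
        h, w = depth_mm.shape
        u, v = np.meshgrid(np.arange(w), np.arange(h))
        z = depth_mm.astype(np.float32) / 1000.0          # metres
        valid = z > 0
        x = (u - cx) * z / fx
        y = (v - cy) * z / fy
        points = np.stack([x[valid], y[valid], z[valid]], axis=1)

        with open(ply_path, "w") as f:
            f.write("ply\nformat ascii 1.0\n")
            f.write(f"element vertex {len(points)}\n")
            f.write("property float x\nproperty float y\nproperty float z\nend_header\n")
            for px, py, pz in points:
                f.write(f"{px} {py} {pz}\n")

    # Example with fake data: a flat plane 1.5 m away.
    depth_to_ply(np.full((480, 640), 1500, np.uint16), "capture.ply")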
Autodesk has a piece of software that will do what you are asking for; it is called "Photofly" and is currently in the Labs section. Using a series of images taken from multiple angles, the 3D geometry is created and then photo-mapped with your images to create the scene.
If you are interested more in the theoretical part of this problem (i.e. if you want to know how it works),
here is a document from Microsoft Research about a moving depth camera and 3D reconstruction.
Try out VisualSfM (http://ccwu.me/vsfm/) by Changchang Wu (http://ccwu.me/)
It takes multiple images from different angles of the scene and outputs a 3D point cloud.
The algorithm is called "Structure from Motion".
Brief idea of the algorithm: it involves extracting feature points in each image, finding correspondences between them across images, building feature tracks, estimating camera matrices, and thereby recovering the 3D coordinates of the feature points.
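For a feel of those stages, a hedged OpenCV sketch of a two-view slice of the pipeline (the image names and intrinsic matrix K are assumptions; VisualSFM estimates such parameters itself):

    import numpy as np
    import cv2

    img1 = cv2.imread("view1.jpg", cv2.IMREAD_GRAYSCALE)   # two views of the same scene
    img2 = cv2.imread("view2.jpg", cv2.IMREAD_GRAYSCALE)

    # 1. extract feature points in each image
    orb = cv2.ORB_create(2000)
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)

    # 2. find correspondences between the two images
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

    # 3. estimate the relative camera pose (K is an assumed intrinsic matrix)
    K = np.array([[800.0, 0, 320.0], [0, 800.0, 240.0], [0, 0, 1.0]])
    E, inliers = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)

    # 4. triangulate the matches into 3D points (a real pipeline keeps only inliers)
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K @ np.hstack([R, t])
    X_h = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
    points3d = (X_h[:3] / X_h[3]).T
    print(points3d.shape[0], "triangulated points")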

Image recognition and 3d rendering

How hard would it be to take an image of an object (in this case a predefined object) and develop an algorithm to cut just that object out of a photo with a background of varying complexity?
Further to this, a photo's object (say a house, car, dog - but always of one type) would need to be transformed into a 3d render. I know there are 3d rendering engines available (at a cost, free, or with some clause), but for this to work the object (subject) would need to be measured in all sorts of ways - e.g. if this is a person, we need to measure height, the curvature of the shoulder, radius of the face, length of each finger, etc.
What would the feasibility of solving this problem be? Does anyone know any good links specializing in this research area? I've seen open-source solutions to this problem, which leaves me with the question of the ease of measuring the object while tracing around it to crop it out.
Thanks
Essentially I want to take a 2D image (a typical image, which is easier than a complex photo containing multiple objects, etc.), but effectively I want to turn that into a 3D image, so wouldn't what I want to do involve building a 3D rendering/modelling engine?
Furthermore, that link I have provided goes into 3ds max, with a few properties set, and a render is made.
It sounds like you want to do several things, all in the domain of computer vision.
Object Recognition (i.e. find the predefined object)
3D Reconstruction (make the 3d model from the image)
Image Segmentation (cut out just the object you are worried about from the background)
I've ranked them in order of easiest to hardest (according to my limited understanding). All together I would say it is a very complicated problem. I would look at the following Wikipedia links for more information:
Computer Vision Overview (Wikipedia)
The Eight Point Algorithm (for 3d reconstruction)
Image Segmentation
You're right, this is an extremely hard set of problems, particularly that of inferring 3D information from a 2D image. Only a very limited understanding exists of how our visual system extrapolates 3D information from 2D images; one such approach is known as "Shape from Shading", and the linked Google search shows how much (and consequently how little) we know.
Rob
This is a very difficult task. The hardest part is not recognising or segmenting the object from the image, but rather inferring the 3-D geometry of the object from the 2-D image. You will have more success if you can use a stereoscopic camera (or a laser scanner, if you have access to one ;).
For the case of 2-D images, try googling for "shape-from-shading". This is a method for inferring 3-D shape from a 2-D image. It does make assumptions about illumination conditions and surface properties (BRDF and geometry) that may fail in many cases, but if you are using it for only a predefined class of objects (e.g. human faces) it can work reasonably well.
Assuming it's possible, that would be extremely difficult, especially with only one image of the object. The rasterizer has to guess at the depth and distances of objects.
What you describe sounds very similar to Microsoft PhotoSynth.

Resources