How to convert an image into a 3D object - three.js

For my requirement, I need to show different buildings to the user on mouse movements (like roaming a city). I have a number of buildings as photos. How do I convert all of those into 3D objects?
a) I can trace (draw) them in Illustrator so each building becomes a 2D drawing, but how do I convert that into a 3D object? And will three.js accept the vector drawing created from it?
b) Or can I directly use the different images to make a 3D view? If so, for example, I have a front view and a back view - how do I convert them into a 3D model?
Here is my sample image:
Can anyone help me make this 3D so my camera can go around it?
I am looking for some good clues, sample work, or tutorials.

a) Don't trace it into a vector; the result will still be 2D. How would you roam a 2D image?
b) This approach defeats the purpose of having a real 3D model instead of an image plane; the result won't be convincing at all.
What you really need to do is the following:
1 - Like 2pha said, get your hands on a 3D modeling package such as 3ds Max or Maya (both paid), or use Blender, which is free and open source. You need a 3D modeling package to recreate, or model, a 3D representation of that building (manually).
This process is very artistic and requires good knowledge of the software. You'd be better off hiring someone to model the building(s) for you.
This video showcases how a 2d image is recreated in 3d:
http://www.youtube.com/watch?v=AnfVrP6L89M
2 - Once the 3d models of the building(s) are created, you need to export them to a format that three.js understands, such as .fbx or .obj
3 - Now, you'd have to import the .obj or .fbx file with your building into three.js
4 - Implement the controls and other requirements.
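As a rough illustration of what steps 2 and 3 involve: the .obj file a modeling package exports is plain text, and three.js's OBJLoader parses it into geometry for you. The toy parser below is not the real loader; it is a minimal sketch handling only `v` (vertex) and `f` (face) lines, just to show what is inside the file:

```javascript
// Minimal sketch of the Wavefront .obj text format. three.js's OBJLoader
// does this (and much more) for you; this only reads vertices and faces.
function parseObj(text) {
  const vertices = [];
  const faces = [];
  for (const line of text.split("\n")) {
    const parts = line.trim().split(/\s+/);
    if (parts[0] === "v") {
      vertices.push(parts.slice(1, 4).map(Number)); // [x, y, z]
    } else if (parts[0] === "f") {
      // face entries look like "1", "1/2" or "1/2/3"; indices are 1-based
      faces.push(parts.slice(1).map(p => parseInt(p.split("/")[0], 10) - 1));
    }
  }
  return { vertices, faces };
}

// A single triangle, as a modeling package might export it:
const obj = parseObj("v 0 0 0\nv 1 0 0\nv 0 1 0\nf 1 2 3");
console.log(obj.vertices.length, obj.faces[0]); // 3 vertices, face [0, 1, 2]
```

Once the loader has produced a mesh like this, step 4 (controls such as OrbitControls) is what lets the camera roam around it.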
Cheers!

You need to model it in a 3D modelling program such as 3ds Max, then export it for use in three.js. The closest thing I know of that can convert images into a model is 123D. For a vague question like this, you really need to search Google.

Related

3D Models in Scenekit

I purchased 3D Models to use in SceneKit, but I am having trouble making the model appear like the final product shown on the sites where I buy them from. I have been purchasing .obj files and converting them in Xcode. I was able to successfully complete one model, but I have 5 others all running into the same problem.
As you can see, I would like it to look like this (picture from the site I purchased it from) Image 1
But when I move on to the .obj file (it came with many others as well), this is where it gets confusing. The model has a lot of materials (which I don't understand either), and when I try to add one of the textures through "Diffuse" it doesn't work at all. This is the best I got. Image 2
The textures also don't seem right. These are all of them, but I don't understand how, even if they were linked up, they would achieve the shiny metal look. Thanks.
Image 3
The materials look like this and there are tons that are repetitive (over 100)
Image 4
Any guidance will be appreciated. Thank you!
You will need to understand how a material is applied to a 3D object. A .obj file will not have a material applied to it, but will come with image files, which are then UV mapped around the 3D object. The diffuse image that you just added to the object is, in simple terms, the colour of the surface of the material. There are different components that can be applied to the 3D object, like specular, normal, occlusion, etc. Of course, applying just the diffuse component was not going to give you a good enough result.
This Unity doc is what made me understand what each of these components are and what they do when applied on an object.
https://docs.unity3d.com/Manual/StandardShaderMaterialParameters.html
This is pretty much similar to what we use in SceneKit, and you should be able to pick up how the map is to be applied on your 3D model.
Basically, when you correctly apply all of the maps to the 3D model, they combine to give the final shaded result.
Another thing that you might want to look into is PBR (Physically Based Rendering).
Depending on the 3D model you purchased, you may find this helpful.
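To see why diffuse alone looks flat: a renderer combines several material components per pixel, and the shiny metal look comes mostly from the specular term. This is a hedged plain-JavaScript sketch of a Phong-style combination, not SceneKit's actual shading code; all names and numbers here are made up for illustration:

```javascript
// Illustrative per-pixel shading sketch: the diffuse map supplies a matte
// base colour, while the specular term adds the bright highlight.
// nDotL / nDotH are the usual normal-light and normal-halfway cosines.
function shade(diffuseColor, specularStrength, shininess, nDotL, nDotH) {
  const diffuse = diffuseColor.map(c => c * Math.max(nDotL, 0));
  const specular = specularStrength * Math.pow(Math.max(nDotH, 0), shininess);
  return diffuse.map(c => Math.min(c + specular, 1)); // clamp to white
}

// Facing the light head-on, the highlight pushes the colour toward white:
console.log(shade([0.5, 0.5, 0.6], 0.8, 32, 1, 1)); // clips to [1, 1, 1]
// With zero specular strength, only the matte diffuse colour survives:
console.log(shade([0.5, 0.5, 0.6], 0, 32, 1, 1));
```

This is why adding the specular (and normal, occlusion, etc.) maps, not just diffuse, is needed before the purchased model looks like the product shots.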
https://developer.apple.com/videos/play/wwdc2016/609/
This WWDC video should give you an understanding of how PBR works.
Also, https://medium.com/@avihay/amazing-physically-based-rendering-using-the-new-ios-10-scenekit-2489e43f7021

Threejs and 3D Tiles (from Cesium)

I am currently in charge of exploring options to display large 3D geological models on a web page. They are built by the geologists with GeoModeller and exported using Cinema 4D to .DAE, or .OBJ. Once displayed, the model should be interactive and link to a database (this part is manageable from my side).
The issue: the models can be really big and I'm concerned that they could cause crashes and render slowly.
Solution considered so far: threejs + 3D Tiles (from cesium).
Questions: Is combining threejs and 3D Tiles actually doable? It is according to 3D Tiles presentation page but I am not a programmer and I have no idea how to implement it.
Is there another obvious solution to my problem?
Resources: What these 3D models look like: http://advancedgwt.com/wp-content/uploads/blog/63.jpg
What 3D Tiles does when combined with Cesium (but we don't want a globe here!): http://cesiumjs.org/NewYork
ThreeJS has everything needed to implement a 3DTiles Viewer
Here's an implementation (by me) : https://github.com/ebeaufay/3DTilesViewer
Here's another one by NASA: https://github.com/NASA-AMMOS/3DTilesRendererJS
The viewer itself is not too difficult to implement, but tiling and multi-leveling gigabytes of mesh data is a challenge. Luckily, I have code that does just that, so hit me up if you're interested.
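The core idea that lets 3D Tiles handle huge models is hierarchical level of detail: each tile carries a geometric error, and a viewer only descends into child tiles while that error is still too noticeable on screen. Here is a hedged sketch of that traversal in plain JavaScript; the tile objects and the screen-space-error formula are simplified stand-ins for what a real tileset.json and viewer use:

```javascript
// Simplified 3D Tiles refinement: render a tile as-is when its geometric
// error projects small enough on screen, otherwise recurse into children.
function selectTiles(tile, distance, maxScreenSpaceError, selected = []) {
  // crude screen-space error: the same error matters less from far away
  const sse = (tile.geometricError / distance) * 1000;
  if (sse > maxScreenSpaceError && tile.children && tile.children.length) {
    for (const child of tile.children) {
      selectTiles(child, distance, maxScreenSpaceError, selected);
    }
  } else {
    selected.push(tile.content); // this tile's mesh is detailed enough
  }
  return selected;
}

// Hypothetical two-level tileset (real tileset.json nodes have more fields):
const root = {
  geometricError: 100, content: "root.b3dm",
  children: [
    { geometricError: 10, content: "child0.b3dm", children: [] },
    { geometricError: 10, content: "child1.b3dm", children: [] },
  ],
};
console.log(selectTiles(root, 10000, 16)); // far away: just the coarse root
console.log(selectTiles(root, 500, 16));   // up close: the detailed children
```

This is why the approach scales: only the tiles the camera is actually near get loaded at full resolution.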

Creating a 3D map with 2D depth images in PROCESSING

I'm creating a 3D map with 2D depth images in Processing. I have captured the images using saveFrame(), however I am having difficulty in converting those saved frames into 3D. Is there any website or code I could look through for help? Any help would be much appreciated.
Before going in-depth into your question, I want to mention that instead of saveFrame() you can use the standard DXF library to export 3D models rather than 2D images from Processing, if you simply want to store a scene:
https://processing.org/reference/libraries/dxf/
Now back to your question. First of all, what are depth images? Are they simply saveFrame() captures of a 3D scene in Processing (P3D), or are they special images? Depth is quite a general term. If they are captures of a 3D scene and you know the coordinates of the camera and its view angle, the task gets easier, but it is technically impossible to perfectly reconstruct a 3D object using only 2D images without X-ray. Imagine looking at a fork: your eyes make two pictures of that fork, but you have no idea what might be inscribed on the back of it. No matter how many pictures you have of your 3D scene, you won't be able to convert it into 3D perfectly. That said, this is a well-known problem in computer science, and there are various methods for solving it. Wikipedia has articles on it:
http://en.wikipedia.org/wiki/3D_reconstruction_from_multiple_images
http://en.wikipedia.org/wiki/2D_to_3D_conversion
Here are a few stackoverflow topics which might help you get started:
3d model construction using multiple images from multiple points (kinect)
How to create 3D model from 2D image
Converting several 2D images into 3D model
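If your "depth images" really do store a depth value per pixel (rather than being plain screenshots), each pixel can be unprojected into a 3D point with a pinhole camera model. A small sketch, with assumed intrinsics (the focal lengths fx, fy and principal point cx, cy are made-up values you would replace with your capture setup's):

```javascript
// Back-project a depth image (flat array, row-major) into a 3D point cloud.
// fx, fy, cx, cy are assumed pinhole-camera intrinsics, not real values.
function depthToPoints(depth, width, height, fx, fy, cx, cy) {
  const points = [];
  for (let v = 0; v < height; v++) {
    for (let u = 0; u < width; u++) {
      const z = depth[v * width + u];
      if (z <= 0) continue; // no depth measured at this pixel
      points.push([(u - cx) * z / fx, (v - cy) * z / fy, z]);
    }
  }
  return points;
}

// Toy 2x2 depth image with every pixel 2 units from the camera:
const cloud = depthToPoints([2, 2, 2, 2], 2, 2, 1, 1, 1, 1);
console.log(cloud[0]); // pixel (0,0) unprojects to [-2, -2, 2]
```

The resulting point cloud is what you would then feed into a meshing or registration step (as in the Kinect-based topics above) to get an actual 3D model.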

Transform a set of 2d images representing all dimensions of an object into a 3d model

Given a set of 2D images that cover all sides of an object (e.g. a car and its roof/sides/front/rear), how could I transform this into a 3D object?
Are there any libraries that could do this?
Thanks
These "2D images" are usually called "textures". You probably want a 3D library that allows you to specify a 3D model with bitmap textures. The library will depend on the platform you are using, but start by looking at OpenGL!
OpenGL for PHP
OpenGL for Java
... etc.
I've heard of the program "Poser" doing this using heuristics for human forms, but otherwise I don't believe this is actually theoretically possible. You are asking to construct volumetric data from flat data (inferring the third dimension).
I think you'd have to make a ton of assumptions about your geometry, and even then, you'd only really have a shell of the object. If you did this well, you'd have a contiguous surface representing the boundary of the object - not a volumetric object itself.
What you can do, like Tomas suggested, is slap these 2d images onto something. However, you still will need to construct a triangle mesh surface, and actually do all the modeling, for this to present a 3D surface.
I hope this helps.
What currently exists that can do anything close to what you are asking for automagically is extremely proprietary. No libraries, but there are some products.
The core issue is matching corresponding points in the images: being able to say, this spot in image A is this spot in image B, and they both match this spot in image C, etc.
There are three ways to go about this, manually matching (you have the photos and have to use your own brain to find the corresponding points), coded targets, and texture matching.
PhotoModeller, www.photomodeller.com, $1,145.00US, supports manual matching and coded targets. You print out a bunch of images, attach them to your object, shoot your photos, and the software finds the targets in each picture and creates a 3D object based on those points.
PhotoModeller Scanner, $2,595.00 US, adds texture matching: tiny bits of the images are compared to see if they represent the same source area.
Both PhotoModeller products depend on shooting the images with a calibrated camera, where you use a consistent focal length for every shot and go through a calibration process to map the lens distortion of the camera.
If you can do manual matching, the Match Photo feature of Google SketchUp may do the job, and SketchUp is free. If you can shoot new photos, you can add your own targets like colored sticker dots to the object to help you generate contours.
If your images are drawings, like profile, plan view, etc. PhotoModeller will not help you, but SketchUp may be just the tool you need. You will have to build up each part manually because you will have to supply the intelligence to recognize which lines and points correspond from drawing to drawing.
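Once corresponding points are matched in two calibrated photos, the reconstruction step itself is triangulation: each match defines a ray from its camera, and the 3D point sits (approximately) where the rays meet. A hedged sketch of that one step in plain JavaScript, with made-up camera positions and ray directions (real tools also handle lens distortion and many views at once):

```javascript
// Triangulate a matched feature from two camera rays by finding the
// closest points on each ray and taking their midpoint (rays from real
// photos rarely intersect exactly because of noise).
const dot = (a, b) => a[0] * b[0] + a[1] * b[1] + a[2] * b[2];

function triangulate(o1, d1, o2, d2) {
  const w0 = [o1[0] - o2[0], o1[1] - o2[1], o1[2] - o2[2]];
  const a = dot(d1, d1), b = dot(d1, d2), c = dot(d2, d2);
  const d = dot(d1, w0), e = dot(d2, w0);
  const t1 = (b * e - c * d) / (a * c - b * b); // distance along ray 1
  const t2 = (a * e - b * d) / (a * c - b * b); // distance along ray 2
  return [0, 1, 2].map(i => (o1[i] + t1 * d1[i] + o2[i] + t2 * d2[i]) / 2);
}

// Camera 1 at the origin looking along +x, camera 2 offset and looking +y;
// both rays pass through the same feature:
console.log(triangulate([0, 0, 0], [1, 0, 0], [2, -1, 0], [0, 1, 0]));
// the matched feature sits at [2, 0, 0]
```

Repeating this for thousands of matched points is what produces the point cloud that products like PhotoModeller turn into a mesh.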
I hope this helps.

Who know how to implement the 2D bone animation showed in the game?

I wonder how they implement the bone animation in the Flash game http://www.foddy.net/athletics.swf
Do you know any study materials I can use to start learning about 2D bone systems? I just implemented an avatar system by composing multiple bitmaps in each frame (similar to MapleStory), but some people told me that a bone system can save more art resources, so I want to learn something about that.
Try Box2D. It's a 2D physics engine that does what you want.
Here's a link: http://www.box2d.org/
Have a look at DragonBones to do bone animations with your images. It's easy, FREE, and can export to multiple file formats. You import your images into a library, create an armature of bones, and then pose the armature in one or more animations.
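The art saving comes from storing one image per body part and animating only bone transforms, instead of redrawing every frame. A minimal forward-kinematics sketch (plain JavaScript, hypothetical two-bone arm) of the core idea, that a child bone inherits its parent's rotation:

```javascript
// Minimal 2D bone chain: each bone rotates about its parent's tip, so a
// single image per limb can be reused in every pose. Angles in radians.
function boneTips(origin, bones) {
  let [x, y] = origin, angle = 0;
  const joints = [];
  for (const bone of bones) {
    angle += bone.angle;              // child inherits parent rotation
    x += bone.length * Math.cos(angle);
    y += bone.length * Math.sin(angle);
    joints.push([x, y]);              // where this bone's image ends up
  }
  return joints;
}

// Upper arm raised 90 degrees, forearm bent back 90 degrees relative to it:
const arm = boneTips([0, 0], [
  { length: 2, angle: Math.PI / 2 },
  { length: 1, angle: -Math.PI / 2 },
]);
console.log(arm); // elbow near [0, 2], hand near [1, 2]
```

Tools like DragonBones build exactly this kind of armature for you and interpolate the angles between keyframes.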
