Threejs and 3D Tiles (from Cesium) - three.js

I am currently in charge of exploring options to display large 3D geological models on a web page. They are built by the geologists with GeoModeller and exported via Cinema 4D to .DAE or .OBJ. Once displayed, the model should be interactive and link to a database (this part is manageable from my side).
The issue: the models can be really big and I'm concerned that they could cause crashes and render slowly.
Solution considered so far: three.js + 3D Tiles (from Cesium).
Questions: Is combining three.js and 3D Tiles actually doable? It is according to the 3D Tiles presentation page, but I am not a programmer and I have no idea how to implement it.
Is there another obvious solution to my problem?
Resources: What these 3D models look like: http://advancedgwt.com/wp-content/uploads/blog/63.jpg
What 3D Tiles does when combined with Cesium (but we don't want a globe here!): http://cesiumjs.org/NewYork

Three.js has everything needed to implement a 3D Tiles viewer.
Here's an implementation (by me): https://github.com/ebeaufay/3DTilesViewer
Here's another one by NASA: https://github.com/NASA-AMMOS/3DTilesRendererJS
The viewer itself is not too difficult to implement, but tiling and building multiple levels of detail for gigabytes of mesh data is the real challenge. Luckily, I have code to do just that, so hit me up if you're interested.
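For a rough idea of what using the NASA renderer looks like, here is a minimal sketch of hooking it into a plain three.js scene. It assumes the package is installed from npm as 3d-tiles-renderer and that geology/tileset.json is a placeholder path to a tileset you have already produced; the exact API may vary between versions.

```js
import * as THREE from 'three';
import { TilesRenderer } from '3d-tiles-renderer'; // NASA-AMMOS/3DTilesRendererJS

// Basic scene, camera and renderer setup.
const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(60, innerWidth / innerHeight, 0.1, 1e6);
camera.position.set(0, 500, 1000);

const renderer = new THREE.WebGLRenderer({ antialias: true });
renderer.setSize(innerWidth, innerHeight);
document.body.appendChild(renderer.domElement);

// 'geology/tileset.json' is a placeholder path to your own 3D Tiles tileset.
const tiles = new TilesRenderer('geology/tileset.json');
tiles.setCamera(camera);
tiles.setResolutionFromRenderer(camera, renderer);
scene.add(tiles.group);

function animate() {
  requestAnimationFrame(animate);
  camera.updateMatrixWorld();
  tiles.update(); // streams in only the tiles/levels of detail the camera needs
  renderer.render(scene, camera);
}
animate();
```

The key point is that only the tiles visible at an appropriate level of detail are ever loaded, which is what keeps multi-gigabyte models from crashing the browser.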

Related

Large lagging on mouse movement with SketchUp Dae model

I've designed a 3D model in SketchUp and I didn't use any textures. I'm facing an issue with lag when moving and rotating with the mouse. When I exported the model in DAE format and imported it into the three.js online editor, mouse movement became very slow; I think the frame rate drops. I can't understand what the problem is with the model I designed. I need your suggestions and ideas on how to resolve this issue. Thanks for your support. I've uploaded an image of the 3D model, please take a look.
Object Count: 98,349, Vertices: 2,107,656, Triangles: 702,552
Object Count: 98,349
The object count results in an equal number of draw calls. Such a high value will degrade performance no matter how complex the respective geometry actually is.
I suggest you redesign the model and merge individual objects as much as possible. Also try to lower the number of vertices and faces.
Keep in mind that three.js does not automatically merge or batch objects for rendering, so it's your responsibility to optimize assets. It's best to do this when designing the model, or in code via methods like BufferGeometryUtils.mergeBufferGeometries() or via instanced rendering.
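As a rough, hypothetical sketch of the merging approach (the stand-in model and sharedMaterial below are made up for illustration; depending on your three.js version the helper may be exported as mergeGeometries() instead, and the import path may differ):

```js
import * as THREE from 'three';
import * as BufferGeometryUtils from 'three/examples/jsm/utils/BufferGeometryUtils.js';

// Stand-in for the imported model: many small meshes sharing one material.
const sharedMaterial = new THREE.MeshStandardMaterial({ color: 0xcccccc });
const model = new THREE.Group();
for (let i = 0; i < 1000; i++) {
  const mesh = new THREE.Mesh(new THREE.BoxGeometry(1, 1, 1), sharedMaterial);
  mesh.position.set(Math.random() * 100, 0, Math.random() * 100);
  model.add(mesh);
}
model.updateMatrixWorld(true);

// Collect the geometries, baking each mesh's world transform into the clone
// so every object stays in its original place after merging.
const geometries = [];
model.traverse((child) => {
  if (child.isMesh) {
    const geometry = child.geometry.clone();
    geometry.applyMatrix4(child.matrixWorld);
    geometries.push(geometry);
  }
});

// One merged geometry means one draw call instead of a thousand.
const merged = BufferGeometryUtils.mergeBufferGeometries(geometries);
const mergedMesh = new THREE.Mesh(merged, sharedMaterial);
```

If many of the objects are identical copies of the same shape, THREE.InstancedMesh (the instanced-rendering route mentioned above) is the other way to collapse them into a single draw call.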

3D Models in SceneKit

I purchased 3D Models to use in SceneKit, but I am having trouble making the model appear like the final product shown on the sites where I buy them from. I have been purchasing .obj files and converting them in Xcode. I was able to successfully complete one model, but I have 5 others all running into the same problem.
As you can see, I would like it to look like this (picture from the site I purchased it from) Image 1
But when I import the .obj file (it came with many others as well), this is where it gets confusing. The model has a lot of materials (which I don't understand either), and when I try to add one of the textures through "Diffuse" it doesn't work at all. This is the best I got. Image 2
The textures also don't seem right. These are all of them, but even if they were linked up, I don't understand how they would achieve the shiny metal look. Thanks.
Image 3
The materials look like this and there are tons that are repetitive (over 100)
Image 4
Any guidance will be appreciated. Thank you!
You will need to understand how a material is applied to a 3D object. A .obj file will not have a material applied to it, but will come with image files, which are then UV mapped around the 3D object. The diffuse image that you just added to the object is, in simple terms, the colour of the surface of the material. There are different components that can be applied to the 3D object, like specular, normal, occlusion, etc. Of course, just applying the diffuse component was not going to give you a good enough result.
This Unity doc is what made me understand what each of these components are and what they do when applied on an object.
https://docs.unity3d.com/Manual/StandardShaderMaterialParameters.html
This is pretty much similar to what we use in SceneKit, and you should be able to pick up how the map is to be applied on your 3D model.
Basically, this is what happens when you correctly apply the maps to the 3D model:
Another thing that you might want to look into is PBR (Physically Based Rendering).
Depending on the 3D Model you purchased, maybe you would find this helpful.
https://developer.apple.com/videos/play/wwdc2016/609/
This WWDC video should give you an understanding of how PBR works.
Also, https://medium.com/@avihay/amazing-physically-based-rendering-using-the-new-ios-10-scenekit-2489e43f7021

How to convert an image into a 3D object - three.js

For my requirement, I need to show different buildings to the user on mouse movements (like roaming a city). I have a number of buildings as photos. How do I convert all of those into 3D objects?
a) I can trace (draw) the building in Illustrator so it becomes a 2D drawing, but how do I convert that into a 3D object? And will three.js accept the vector drawing created from it?
b) Or can I directly use the different images to make a 3D view? If so, for example I have a front view and a back view - how do I convert them into a 3D model?
Here is my sample image:
Can anyone help me show how to make this 3D so my camera can go around it?
I am looking for some good clues, sample work, or tutorials.
a) Don't trace it into a vector; it will still be 2D if you do. How would you roam a 2D image?
b) This approach defeats the purpose of having a real 3D model instead of an image plane; the result won't be convincing at all.
What you really need to do is the following:
1 - Like 2pha said, get your hands on a 3D modeling package such as 3ds Max or Maya (both paid), or you can use Blender, which is free and open source. You need a 3D modeling package to recreate, or model, a 3D representation of that building (manually).
This process is very artistic, and requires good knowledge of the software. You'd be better off hiring someone who'd model the building(s) for you.
This video showcases how a 2d image is recreated in 3d:
http://www.youtube.com/watch?v=AnfVrP6L89M
2 - Once the 3D models of the building(s) are created, you need to export them to a format that three.js understands, such as .fbx or .obj.
3 - Now, you'd have to import the .obj or .fbx file with your building into three.js (see the sketch after this list).
4 - Implement the controls and other requirements.
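As a hedged sketch of steps 3 and 4, assuming the model was exported as .obj (models/building.obj is a placeholder path), loading it and letting the camera orbit around it could look roughly like this:

```js
import * as THREE from 'three';
import { OBJLoader } from 'three/examples/jsm/loaders/OBJLoader.js';
import { OrbitControls } from 'three/examples/jsm/controls/OrbitControls.js';

const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(50, innerWidth / innerHeight, 0.1, 1000);
camera.position.set(0, 10, 30);

const renderer = new THREE.WebGLRenderer({ antialias: true });
renderer.setSize(innerWidth, innerHeight);
document.body.appendChild(renderer.domElement);

// Simple lighting so the untextured building is visible.
scene.add(new THREE.AmbientLight(0xffffff, 0.6));
const sun = new THREE.DirectionalLight(0xffffff, 0.8);
sun.position.set(10, 20, 10);
scene.add(sun);

// Step 3: load the exported .obj file ('models/building.obj' is a placeholder).
new OBJLoader().load('models/building.obj', (building) => {
  scene.add(building);
});

// Step 4: mouse controls to roam around the building.
const controls = new OrbitControls(camera, renderer.domElement);

function animate() {
  requestAnimationFrame(animate);
  controls.update();
  renderer.render(scene, camera);
}
animate();
```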
Cheers!
You need to model it in a 3D modelling program such as 3ds Max, then export it to use in three.js. The closest thing that I know of that can convert images into a model is 123D. You really need to search Google for a vague question like this.

Creating a 3D map with 2D depth images in PROCESSING

I'm creating a 3D map with 2D depth images in Processing. I have captured the images using saveFrame(), however I am having difficulty in converting those saved frames into 3D. Is there any website or code I could look through for help? Any help would be much appreciated.
Before I go in depth into your question, I want to mention that instead of saveFrame() you can use the standard DXF library to export 3D models instead of 2D images from Processing, if you simply want to store a scene:
https://processing.org/reference/libraries/dxf/
Now back to your question. First of all, what are depth images? Are they simply saveFrame() outputs from a 3D scene in Processing (P3D), or are they special images? Depth is quite a general term. If they are 3D scenes and you know the coordinates of the camera and its view angle, the task gets quite a bit easier, but it is technically impossible to perfectly reconstruct a 3D object using only 2D images without X-ray. Imagine looking at a fork: your eyes make two pictures of that fork, yet you have no idea what might be inscribed on the back of it. No matter how many pictures you have of your 3D scene, you won't be able to convert it into 3D perfectly. That said, this is indeed a general problem in computer science and there are various methods to solve it. Wikipedia has articles on it:
http://en.wikipedia.org/wiki/3D_reconstruction_from_multiple_images
http://en.wikipedia.org/wiki/2D_to_3D_conversion
Here are a few stackoverflow topics which might help you get started:
3d model construction using multiple images from multiple points (kinect)
How to create 3D model from 2D image
Converting several 2D images into 3D model

Which concepts should be used to render road lighting from a vehicle point of view using Blender?

I have never used Blender except for quick trials when I installed it on Linux, but I wonder if it can be used to solve a very specific problem.
I want to render some images showing a vehicle projecting light onto a road with some objects (people, posts, signs). I need a bird's eye view (top-down, orthographic) and the view from inside the vehicle (perspective, first-person), i.e. the image that would be seen by the driver or rider.
My question is: "Which CONCEPTS should I look for when searching Blender tutorials, in order to:
Select and use the proper rendering algorithm;
Model a scene with surfaces, materials, light sources, and cameras;
Add photorealistic behavior regarding light diffusion, reflection, etc."
Sorry if that is too obvious or too basic, but I am not even sure if Blender is able to model such a thing with an acceptable degree of photorealism (not super-realistic, that is not my intention).
Also, if there is another more appropriate Stack Exchange site to post this question on, please let me know.
A nice first-person viewport would be similar to this (without contour lines):
And a nice bird's eye viewport (without color-mapping) would be this:
Cycles is Blender's newer render engine; it is fully raytraced and can easily create realistic results. On the other hand, the older Blender Internal renderer can give you more control over lights, like length and angle from the source, as well as the ability to subtract light from areas; it also supports volumetric rendering (if you want a foggy lit area), which is still being worked on for Cycles. This may be key to the results you want. As you want to have control over the area that is lit, I would run a couple of tests with lights over a plane to see whether Cycles or Blender Internal can more easily give the results you're after.
As for the final render, you can set the camera to perspective (with control over focal length) or orthographic (and adjust its scale), and there is also the option of a panoramic camera, so you can get the final image you want.
Blender includes a ruler and protractor feature, and there are also a couple of addons that may help. The scene settings offer metric or imperial display of measurements within Blender.
For concepts, it sounds like your final scene would be fairly simple and any basic modelling and texturing would help. Blendswap could be a good resource for free models to help get you started.
For tutorials, Blender Cookie is a great site for tutorials on specific tasks and has a good introduction-to-Blender tutorial, while Blender Guru tutorials focus more on the final image.
Blender has also had its own Stack Exchange site, blender.stackexchange.com, for a few months now.
