ThreeJS 3D Objects quality - three.js

Does anybody know how it's possible to get such good quality for 3D objects like here http://showroom.littleworkshop.fr/? Are the objects exported at this quality from 3ds Max, Blender, or something similar, or is the quality improved in three.js? As far as I can tell, the project was created with three.js. Any information regarding this project would be helpful.
Thank you.

The question of quality is subjective. A better question would be "how can I create a scene using three.js with photo-realistic lighting and materials".
So here's an answer to that. There are a few points that make the example you provided look good:
1 - lighting. The example uses a mixture of direct and ambient lighting.
It is practically impossible to generate such lighting in real time in three.js (or any other 3D package, for that matter) with the current state of the art on commodity hardware, so you need to use light maps. Light maps are pre-rendered textures of light and shadow; they can take a long time to generate, but they look incredible, as demonstrated by the example you mentioned. You can use Blender's Cycles renderer to generate light maps via its "Bake" feature. The full topic of light-map generation is outside the scope of this question, but a minimal three.js sketch appears after point 2 below.
2 - Physically based materials. This term refers to material models that represent real-life materials well, going beyond the basic "plastic" look. Three.js has at least one such material: MeshStandardMaterial, which is based on the metalness/roughness/albedo model (https://threejs.org/examples/?q=material#webgl_materials_standard).
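To make both points concrete, here is a minimal sketch of a baked light map combined with a physically based material in three.js. The texture file names, the geometry variable and the scene are placeholders standing in for your own assets and loading code, and the second-UV detail may differ between three.js releases.

    import * as THREE from 'three';

    // Placeholder texture names; use the maps you export/bake from Blender or 3ds Max.
    const texLoader = new THREE.TextureLoader();

    const material = new THREE.MeshStandardMaterial({
      map: texLoader.load('albedo.jpg'),              // base colour
      metalnessMap: texLoader.load('metalness.jpg'),
      roughnessMap: texLoader.load('roughness.jpg'),
      normalMap: texLoader.load('normal.jpg'),
      lightMap: texLoader.load('baked_lightmap.png'), // light/shadow baked in Cycles
      lightMapIntensity: 1.0
    });

    // Classic three.js releases sample lightMap (and aoMap) from a second UV
    // channel ("uv2"); copying the first UV set is a common fallback when the
    // model only ships with one.
    geometry.setAttribute(
      'uv2',
      new THREE.BufferAttribute(geometry.attributes.uv.array, 2)
    );

    scene.add(new THREE.Mesh(geometry, material));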
good luck!

Turn on antialiasing for better rendering quality; it works great.
Also set up lights and the camera view as per your requirements.
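For reference, a short sketch of the standard way to enable antialiasing when creating the renderer (the sizing and DOM attachment are shown only for context):

    // Request a multisampled drawing buffer via the constructor option.
    const renderer = new THREE.WebGLRenderer({ antialias: true });
    renderer.setSize(window.innerWidth, window.innerHeight);
    document.body.appendChild(renderer.domElement);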

Related

THREE JS Imported Model Size and Performance

I was hoping to display in my app as many as 3,000 instances of the same model at the same time - but it’s really slowing down my computer. It's just too much.
I know InstancedMesh is the way to go for something like this so I’ve been following THREE.js’s examples here: https://threejs.org/docs/#api/en/objects/InstancedMesh
The examples are fantastic, but they seem to use really small models which makes it really hard to get a good feel for what the model size-limits should be.
For example:
-The Spheres used here aren't imported custom 3D models, they're just instances of IcosahedronGeometry
-The Flower.glb model used in this example is tiny: it only has 218 Vertices on it.
-And the “Flying Monkeys” here come from a ".json" file so I can’t tell how many vertices that model has.
My model by comparison has 4,832 Vertices - which by the way, was listed in the "low-poly" category where I found it, so it's not really considered particularly big.
In terms of file-size, it's exactly 222kb.
I tried scaling it down in Blender and re-exporting it - still came out at 222kb.
Obviously I can take some “drastic measures”, like:
-Try to re-design my 3D model and make it smaller - but that would greatly reduce its beauty and the overall aesthetics of the project
-I can re-imagine or re-architect the project to display maybe 1,000 models at the same time instead of 3,000
etc.
But being that I’m new to THREE.js - and 3D modeling in general, I just wanted to first ask the community if there are any suggestions or tricks to try out first before making such radical changes.
-The model I'm importing is in the .glTF format - is that the best format to use or should I try something else?
-All the meshes in it come into the browser as instances of BufferGeometry which I believe is the lightest in terms of memory demands - is that correct?
Are there any other things I need to be aware of to optimize performance?
Some setting in Blender or other 3D modeling software that can reduce model-size?
Some general rules of thumb to follow when embarking on something like this?
Would really appreciate any and all help.
Thanks!
glTF is fine for transmitting geometry and materials; I would say it is the standard right now. If there's only geometry, I'd look at the OBJ or PLY formats.
The model size only matters for the initial load if we employ instancing on its geometry and material. That way we simply re-use the already generated geometry and its material.
At the GPU level, instancing means drawing a single mesh with a single material shader, many times. You can override certain inputs to the material for each instance, but it sort of has to be a single material.
— Don McCurdy
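A minimal sketch of that idea, assuming a hypothetical "model.gltf" whose first child is a single mesh (adjust the traversal for your own asset) and a scene defined elsewhere:

    import * as THREE from 'three';
    import { GLTFLoader } from 'three/examples/jsm/loaders/GLTFLoader.js';

    const COUNT = 3000;
    new GLTFLoader().load('model.gltf', (gltf) => {
      const source = gltf.scene.children[0]; // assumed to be a THREE.Mesh
      const instanced = new THREE.InstancedMesh(source.geometry, source.material, COUNT);

      const dummy = new THREE.Object3D();
      for (let i = 0; i < COUNT; i++) {
        // One draw call re-uses the same geometry and material for every instance.
        dummy.position.set(Math.random() * 100, 0, Math.random() * 100);
        dummy.updateMatrix();
        instanced.setMatrixAt(i, dummy.matrix);
      }
      instanced.instanceMatrix.needsUpdate = true;
      scene.add(instanced);
    });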
The biggest worry here is the number of triangles or faces rendered: lower counts are more performant, and so is drawing fewer models at a time. For this, you can use some degree of LOD (level of detail) to progressively decrease your models' detail with distance until you stop rendering them altogether; a small sketch follows the list below.
Some examples/resources to get you started:
LOD
Instancing Models
Modifying Instances
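A sketch of the LOD idea, where highDetailMesh, mediumDetailMesh and lowDetailMesh are placeholders for versions of your model at different triangle counts:

    const lod = new THREE.LOD();
    lod.addLevel(highDetailMesh, 0);    // used when the camera is close
    lod.addLevel(mediumDetailMesh, 50); // swapped in at 50 units away
    lod.addLevel(lowDetailMesh, 150);   // used far away
    scene.add(lod);
    // With lod.autoUpdate left at its default, the renderer picks the level each frame.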

3D Models in Scenekit

I purchased 3D Models to use in SceneKit, but I am having trouble making the model appear like the final product shown on the sites where I buy them from. I have been purchasing .obj files and converting them in Xcode. I was able to successfully complete one model, but I have 5 others all running into the same problem.
As you can see, I would like it to look like this (picture from the site I purchased it from) Image 1
But when I bring in the .obj file (it came with many more files as well), this is where it gets confusing. The model has a lot of materials (which I don't understand either), and when I try to add one of the textures through "Diffuse" it doesn't work at all. This is the best I got. Image 2
The textures also don't seem right; these are all of them, but even if they were linked up, I don't understand how that would achieve the shiny metal look. Thanks.
Image 3
The materials look like this and there are tons that are repetitive (over 100)
Image 4
Any guidance will be appreciated. Thank you!
You will need to understand how a material is applied to a 3D object. A .obj file will not have a material applied to it, but it will come with image files, which are then UV-mapped around the 3D object. The diffuse image that you just added to the object is, in simple terms, the colour of the surface of the material. There are different components that can be applied to the 3D object, like specular, normal, occlusion, etc. Of course, just applying the diffuse component was not going to give you a good enough result.
This Unity doc is what made me understand what each of these components are and what they do when applied on an object.
https://docs.unity3d.com/Manual/StandardShaderMaterialParameters.html
This is pretty much similar to what we use in SceneKit, and you should be able to pick up how the map is to be applied on your 3D model.
Basically, this is what happens when you correctly apply the maps to the 3D model:
Another thing that you might want to look into is PBR(Physically Based Rendering)
Depending on the 3D Model you purchased, maybe you would find this helpful.
https://developer.apple.com/videos/play/wwdc2016/609/
This WWDC video should give you an understanding of how PBR works.
Also, https://medium.com/@avihay/amazing-physically-based-rendering-using-the-new-ios-10-scenekit-2489e43f7021

Which concepts should be used to render road lighting from a vehicle point of view using Blender?

I have never used Blender except for quick trials when I installed it on Linux, but I wonder if it can be used to solve a very specific problem.
I want to render some images showing a vehicle projecting light onto a road with some objects (people, posts, signs). I need a bird's-eye view (top-down, orthographic), and the view from inside the vehicle (perspective, first-person), that is, the image that would be seen by the driver or rider.
My question is: "Which CONCEPTS should I look for when searching Blender tutorials, in order to:
Select and use the proper rendering algorithm;
Modeling a scene with surfaces, materials, light sources and cameras;
Adding photorealistic behavior regarding light diffusion, reflection, etc.
Sorry if that is too obvious or too basic, but I am not even sure if Blender is able to model such a thing with an acceptable degree of photorealism (not super-realistic, that is not my intention).
Also, if there is another more appropriate Stack Exchange site to post this question, please let me know.
A nice First-Person viewport would be similar to this (without contour lines):
And a nice bird's-eye viewport (without color-mapping) would be this:
Cycles is Blender's newer render engine; it is fully ray-traced and can easily create realistic results. On the other hand, the older Blender Internal renderer gives you more control over lights, such as length and angle from the source, but also the ability to subtract light from areas; it also supports volumetric rendering (if you want a foggy lit area), which is still being worked on for Cycles. This may be a key to the results you want. As you want control over the area that is lit, I would run a couple of tests with lights over a plane to see whether Cycles or Blender Internal can more easily give the results you're after.
As for the final render you can set the camera to perspective with control over focal length or orthographic and adjust scale as well as the option of a panoramic camera to get the final image you want.
Blender includes a ruler and protractor feature, and there are also a couple of add-ons that may help. The scene settings offer metric or imperial display of measurements within Blender.
For concepts, it sounds like your final scene would be fairly simple, and any basic modelling and texturing tutorials would help. Blendswap could be a good resource for free models to help get you started.
For tutorials Blender Cookie is a great site for tutorials on specific tasks and has a good introduction to blender tutorial, while Blenderguru tutorials focus more on the final image.
Blender has also had its own Stack Exchange site, blender.stackexchange.com, for a few months now.

Is it possible to import/process advanced light attributes (IES) in WebGL?

IES (Illuminating Engineering Society) is a file format (.ies) that enhances lights in animation tools. It adds accurate falloff, dispersion, color temperature, spatial emission, brightness and the like. It is an industry standard for showing how lighting products really look. Many animation tools (Maya, Cinema 4D, Blender, etc.) are able to use this format.
Yet I'm still searching for a way to import/use IES in WebGL frameworks. Using an animation tool (in my case Blender) to import and process .ies files and finally export the project to a WebGL format seemed the most promising method to me. I tried out a couple of WebGL frameworks (three.js, x3dom, CopperCube) and many more export formats and import/converter scripts, but none of them produced satisfying results. The processed results showed no lights, default lights, or at best the same number of lights with no further attributes.
Does anybody by any chance know of a working combination of animation tool -> export function -> WebGL-framework or WebGL-ies-format-import that would do the trick?
Are there people with the same problem and longing for a solution?
You could maybe transfer some attributes from .ies files (color, intensity and such), but the Three.js renderer simply doesn't support the complex light properties (like the shape of the light) that .ies is meant to describe. So, even if you were able to import/export those properties, the default Three.js light system would not be able to render them properly.
Even if you implemented your own shaders and lights (you can do that in Three.js), it would probably still be prohibitively slow and/or inaccurate, as you likely need some raytracing/pathtracing-based approach for good enough results. The WebGL approach to rendering in general, no matter what library or framework you use, is not a very good fit for complex, accurate simulation of lights and shadows.
That being said, I would be highly interested in any solution, even a crude approximate support would be useful.
Three.js now has an example of area lights. It is a very crude approximation (analytic evaluation of the area integral after some serious simplification). There is an article by Insomniac Games in GPU Pro 5 that offers a more accurate solution.
This only helps you with the area and shape of the light; you are on your own with the other attributes.
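For reference, a sketch of the area-light approximation mentioned above (RectAreaLight in current three.js builds). It only covers the area and shape of the light; the scene variable and all the numbers are placeholders.

    import * as THREE from 'three';
    import { RectAreaLightUniformsLib } from 'three/examples/jsm/lights/RectAreaLightUniformsLib.js';

    RectAreaLightUniformsLib.init(); // required once before RectAreaLight is used

    // color, intensity, width, height
    const areaLight = new THREE.RectAreaLight(0xffffff, 5.0, 4, 2);
    areaLight.position.set(0, 5, 0);
    areaLight.lookAt(0, 0, 0);
    scene.add(areaLight);

    // RectAreaLight only affects MeshStandardMaterial / MeshPhysicalMaterial
    // and does not cast shadows.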
P.S. It's hard to tell from your question, but if your light format contains spectral data, there have recently been some nice articles on real-time spectral lighting:
http://www.numb3r23.net/2013/03/26/spectral-rendering-on-the-gpu-now-with-bumps/

Overall strategy for storing animated meshes

I've been trying to figure out how you'd take a mesh generated in a program like 3ds max and bring that into your game with animations, textures, etc.
I've looked at FBX and Collada, but from what I've read, they're used as an intermediate step between the modelling software and some final format that may be custom to the game. What I'm looking for is a book or tutorial that would go over in a general way what you would store in your custom file, how you would store animation data, etc.
Right now I don't really have a general plan of attack and all of the guides I've seen stick to rendering a few triangles.
It doesn't have to be implementation specific to OpenGL, although that is what I'll be using.
Yes, Collada is an interchange format.
What that means is that it is very much generic, and if I am right, that is exactly what you are looking for!
You can use a library such as Assimp to load collada into a generic scene graph, and then have your game/renderer use it directly, or preprocess and then consume it.
