History:
Working on Clara.io anatomy models exported from Mathematica
https://forum.clara.io/t/transparency-artifact/6421
https://clara.io/player/v2/588449a8-1370-4433-8b30-544e34dba272?wait=true
Some anomalies are observed when surfaces enclose volumes or overlap in certain ways.
In most cases the problem does not appear:
https://clara.io/player/v2/1e0c3009-dabc-4d0c-af91-8dec5b851263?wait=true
It mostly happens when the surfaces bound a volume:
https://clara.io/player/v2/4d432c4e-2b0e-4c9c-9fee-18ff8517d6f8?wait=true
The problem also occurs in OpenGL and Sketchfab:
https://sketchfab.com/models/d7542d0179064a208a1c952dba028376
The three.js folks told me to post here.
Dara
Related
I was looking at three.js as a replacement for deck.gl in our existing WebGL software, for reasons not relevant to this question. One of the input data sources is large vector data exported from CAD systems. One scene integrates about five collections of linear features and areas; each such collection is a 10-50 MB SVG. In deck.gl, we did a crazy but very effective hack: we converted the vectors to geographic coordinates and used lazy loading via the deck.gl tile layer. It improved rendering performance tremendously but required additional tweaking, because the majority of the data is still in cartesian coordinates.
I haven't found comparable lazy loading of such large vector data in three.js. There are plenty of format-specific loaders, but they are still just different means of upfront loading. While vector tiles were created in a geographical context, the principle of pre-rendering and pre-tiling data for lazy loading should be universally applicable? But I could not find any non-geo implementation, much less one with three.js support. Our geo-hacking was effective but never felt correct, because the model is naturally cartesian. The internet suggests that Cesium may be more flexible than deck.gl for our new requirements, but we would prefer to avoid the geo-hacking, not dive deeper into it.
What did I miss?
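For later readers: the pre-tiling principle does work in plain cartesian space. Below is a minimal sketch of one way to do it with three.js, assuming the vector data has been pre-cut offline into a quadtree of JSON tile files; the tiles/{level}/{x}_{y}.json layout and all names are hypothetical, not a stock three.js loader.

```ts
import * as THREE from 'three';

// Streams cartesian vector tiles into the scene as they enter the frustum.
// Assumes each tile file holds a flat [x,y,z, x,y,z, ...] array of
// line-segment endpoints (a hypothetical offline tiling of the SVG data).
class CartesianTileLayer {
  private loaded = new Map<string, THREE.LineSegments | null>();

  constructor(
    private scene: THREE.Scene,
    private tileSize: number, // world units per tile edge
    private level: number,    // a single fixed zoom level, for simplicity
  ) {}

  // Call once per frame: request any unseen tile that enters the frustum.
  update(camera: THREE.PerspectiveCamera): void {
    const frustum = new THREE.Frustum().setFromProjectionMatrix(
      new THREE.Matrix4().multiplyMatrices(
        camera.projectionMatrix, camera.matrixWorldInverse));

    // Only test a small window of tile indices around the camera.
    const cx = Math.floor(camera.position.x / this.tileSize);
    const cy = Math.floor(camera.position.y / this.tileSize);
    for (let x = cx - 2; x <= cx + 2; x++) {
      for (let y = cy - 2; y <= cy + 2; y++) {
        const key = `${this.level}/${x}_${y}`;
        if (this.loaded.has(key)) continue;
        const box = new THREE.Box3(
          new THREE.Vector3(x * this.tileSize, y * this.tileSize, -1),
          new THREE.Vector3((x + 1) * this.tileSize, (y + 1) * this.tileSize, 1));
        if (frustum.intersectsBox(box)) this.loadTile(key);
      }
    }
  }

  private async loadTile(key: string): Promise<void> {
    this.loaded.set(key, null);                   // mark as in flight
    const res = await fetch(`tiles/${key}.json`); // hypothetical tile layout
    const positions: number[] = await res.json();
    const geom = new THREE.BufferGeometry();
    geom.setAttribute('position', new THREE.Float32BufferAttribute(positions, 3));
    const lines = new THREE.LineSegments(
      geom, new THREE.LineBasicMaterial({ color: 0x333333 }));
    this.loaded.set(key, lines);
    this.scene.add(lines);
  }
}
```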
Does somebody know how it's possible to get such good quality for 3D objects like here: http://showroom.littleworkshop.fr/? Are the objects exported at this quality from 3ds Max or Blender or something similar, or is the quality improved in three.js? As far as I saw, the project was created with three.js. Any information regarding this project will be helpful.
Thank you.
Question of quality is subjective. A better question would be "how can I create a scene using three.js with photo-realistic lighting and materials".
So here's an answer to that. There are a few points that make the example you provided look good:
1 - Lighting. The example uses a mixture of direct and ambient lighting.
It is practically impossible to generate such lighting in real time in three.js (or any other 3D package, for that matter) with the current state of the art on commodity hardware. So, you need to use light maps. Light maps are pre-rendered textures of light and shadow; they can take a long time to generate, but they look incredible, as demonstrated by the example you mentioned. You can use Blender's Cycles renderer to generate light maps using the "Bake" feature. The topic of light-map generation is really outside the scope of the question.
2 - Physically based materials. This term refers to material models that represent real-life materials well, beyond just "plastic". Three.js has at least one such material: MeshStandardMaterial, which is based on the metalness/roughness/albedo model (https://threejs.org/examples/?q=material#webgl_materials_standard).
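As a minimal sketch of how those two ingredients meet in code (texture paths are hypothetical, and this is not the Little Workshop source):

```ts
import * as THREE from 'three';

const loader = new THREE.TextureLoader();
const material = new THREE.MeshStandardMaterial({
  map: loader.load('textures/albedo.jpg'),            // base color (albedo)
  roughnessMap: loader.load('textures/roughness.jpg'),
  metalnessMap: loader.load('textures/metalness.jpg'),
  lightMap: loader.load('textures/baked_light.jpg'),   // baked in Cycles
  lightMapIntensity: 1.0,
});
// Note: lightMap samples the geometry's second UV set, so the mesh needs
// one ('uv2' in older three.js releases, 'uv1' in newer ones).
```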
good luck!
Turn on antialiasing for better rendering quality; it works great.
Also, set up lights and the camera view as your scene requires.
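For reference, the antialias flag is a renderer constructor option in three.js; the rest of this snippet is just standard setup:

```ts
import * as THREE from 'three';

// Antialiasing is requested once, at renderer creation; it cannot be
// toggled on an existing WebGLRenderer.
const renderer = new THREE.WebGLRenderer({ antialias: true });
renderer.setPixelRatio(window.devicePixelRatio); // match the display density
renderer.setSize(window.innerWidth, window.innerHeight);
document.body.appendChild(renderer.domElement);
```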
I have a sphere with a texture of Earth that I generate on the fly with the canvas element from an SVG file and then manipulate.
The texture size is 16384x8192; anything less looks blurry on close zoom.
But this is a huge texture size and it causes memory problems... (though it looks very good when it works).
I think a better approach would be to split the sphere into 32 separate textures, each 2048x2048.
A few questions:
How can I split the sphere and assign the right textures?
Is this approach better in terms of memory and performance than a single huge texture?
Is there a better solution?
Thanks
You could subdivide a cube, and cubemap it.
Instead of having one texture per face, you would have NxN textures per face. 32 doesn't sound like a good number, but 24, for example, does (6x2x2).
You will still use the same amount of memory. If the shape actually needs to be spherical, you can further subdivide the segments and normalize the entire shape (spherify it), as in the sketch below.
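A minimal sketch of that subdivide-and-normalize idea, assuming a [-1, 1] cube and showing only the +Z face (the other five faces are rotations of it; all names are illustrative):

```ts
import * as THREE from 'three';

// Builds one patch of the +Z cube face, then normalizes every vertex onto
// the unit sphere ("spherify"). Each patch is its own mesh and material,
// so each can carry its own 2048x2048 texture; 6 faces x 2x2 patches
// gives the 24 tiles suggested above.
function makeSpherePatch(
  u0: number, v0: number,  // patch origin in [0, 1] face coordinates
  size: number,            // patch extent, e.g. 0.5 for a 2x2 split
  segments: number,        // subdivisions; more = rounder sphere
  material: THREE.Material,
): THREE.Mesh {
  const geom = new THREE.PlaneGeometry(2 * size, 2 * size, segments, segments);
  // Move the patch onto the +Z face of the [-1, 1] cube.
  geom.translate(2 * (u0 + size / 2) - 1, 2 * (v0 + size / 2) - 1, 1);

  // Project every vertex onto the unit sphere.
  const pos = geom.attributes.position;
  const v = new THREE.Vector3();
  for (let i = 0; i < pos.count; i++) {
    v.fromBufferAttribute(pos, i).normalize();
    pos.setXYZ(i, v.x, v.y, v.z);
  }
  geom.computeVertexNormals();
  return new THREE.Mesh(geom, material);
}
```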
You probably can't even use such a big texture anyway; many GPUs and WebGL implementations cap the maximum texture size below 16384.
notice the top sphere (cubemap, ignore isocube):
Typically, that's not something you'd do programmatically, but in a 3D program like Blender or 3ds Max. It involves some trivial mesh separation, UV mapping, and material assignment. One other approach worth experimenting with would be to have multiple materials but only one mesh - you'd still get (somewhat) progressive loading. BUT
Are you sure you'd be better off with "chunks" loading sequentially rather than one big texture taking a huge amount of time? Sure, it'll improve things a bit in terms of timeouts and caching, but the tradeoff is having big chunks of your mesh be textureless, which is noticeable and unaesthetic.
There are a few approaches that would mitigate your problem. First, it's important to understand that texture-loading optimization techniques - while common in game engines - aren't really part of three.js or what it's built for. You'll never get the near-seamless LODs or GPU optimization techniques that you'll get with UE4 or Unity. Furthermore, WebGL - while having made many strides over the past decade - is not ideal for handling vast texture sizes, not at the GPU level (since it's based on OpenGL ES, suited primarily for mobile devices) and certainly not at the caching level - we're still dealing with browsers here. You won't find a lot of WebGL work done with vast textures of the dimensions you refer to.
Having said that,
A. A loader will let you do other things while your textures are loading, so your user isn't staring at an 'unfinished mesh'. It lets you be pretty clever with dynamic loading times and UX design. Additionally, take a look at this gist to get an idea of what a progressive texture loader could look like. A much more involved technique, which is JPEG-specific, can be found here, but I wouldn't approach it unless you're comfortable with low-level graphics programming.
B. Three.js does have a basic implementation of LOD (see the sketch after this list), although I haven't tinkered with it myself and am not sure it's useful for textures; that said, the basic premise to inquire into is whether you can load progressively higher-resolution files on a per-need basis, just like Google Earth does, for example.
C. This is out of the scope of your question - but I'd look into what happens under the hood in Unity's WebGL export (which compiles the engine for the browser rather than building on three.js), and what kind of clever tricks are employed there for similar purposes.
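On point B: a tiny sketch of what THREE.LOD does for geometry (whether it helps with your textures is exactly the open question above; the distances and tessellation levels are illustrative):

```ts
import * as THREE from 'three';

const scene = new THREE.Scene();
const lod = new THREE.LOD();
const mat = new THREE.MeshNormalMaterial();

// Same shape at three tessellation levels, swapped by camera distance.
lod.addLevel(new THREE.Mesh(new THREE.SphereGeometry(1, 64, 32), mat), 0);  // near
lod.addLevel(new THREE.Mesh(new THREE.SphereGeometry(1, 16, 8), mat), 50);
lod.addLevel(new THREE.Mesh(new THREE.SphereGeometry(1, 6, 3), mat), 200);  // far
scene.add(lod);

// The renderer picks the level each frame while LOD.autoUpdate is true
// (the default); lod.update(camera) does the same thing manually.
```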
Finally, does your project have to be in WebGL? For something ambitious and demanding, sometimes "proper" OpenGL / DX makes much more sense.
I have never used Blender except for quick trials when I installed it on Linux, but I wonder if it can be used to solve a very specific problem.
I want to render some images showing a vehicle projecting light onto a road with some objects (people, posts, signs). I need a bird's-eye (overhead, orthogonal) view, and the view from inside the vehicle (perspective, first-person), which is the image that would be seen by the driver or rider.
My question is: "Which CONCEPTS should I look for when searching Blender tutorials, in order to:
Select and use the proper rendering algorithm;
Model a scene with surfaces, materials, light sources, and cameras;
Add photorealistic behavior regarding light diffusion, reflection, etc.?"
Sorry if this is too obvious or too basic, but I am not even sure whether Blender can model such a thing with an acceptable degree of photorealism (not super-realistic; that is not my intention).
Also, if there is a more appropriate Stack Exchange site to post this question on, please let me know.
A nice first-person viewport would be similar to this (without the contour lines):
And a nice bird's-eye viewport (without the color-mapping) would be this:
Cycles is Blender's newer render engine; it is fully raytraced and can easily create realistic results. On the other hand, the older Blender Internal renderer gives you more control over lights - such as the length and angle of the beam from the source, and even the ability to subtract light from areas - and it also supports volumetric rendering (if you want a foggy lit area), which is still being worked on for Cycles. This may be key to the results you want. As you want control over the area that is lit, I would run a couple of tests with lights over a plane to see whether Cycles or Blender Internal more easily gives the results you're after.
As for the final render, you can set the camera to perspective (with control over the focal length) or orthographic (adjusting its scale), with the further option of a panoramic camera, to get the final image you want.
Blender includes a ruler and protractor feature, and there are also a couple of addons that may help. The scene settings offer metric or imperial display of measurements within Blender.
For concepts: it sounds like your final scene would be fairly simple, so any basic modelling and texturing tutorial would help. Blendswap could be a good resource for free models to help get you started.
For tutorials, Blender Cookie is a great site for tutorials on specific tasks and has a good introduction-to-Blender series, while Blenderguru tutorials focus more on the final image.
Blender has also had its own Stack Exchange site, blender.stackexchange.com, for a few months now.
Occlusion algorithms are necessary in both the CAD and game industries, and I think they differ between the two. My questions are:
What kinds of occlusion algorithms are applied in each of the two industries?
And what is the difference?
I am working on CAD software development, and the occlusion algorithm we have adopted is: set each object's identifier as its color (an integer), render the scene, and finally read back the pixels to find out which objects are visible. The performance is not so good, so I want to get some good ideas here. Thanks.
After reading the answers, I want to clarify that "occlusion algorithms" here means occlusion culling - finding the visible surfaces or entities before sending them into the pipeline.
With Google, I have found an algorithm at Gamasutra. Any other good ideas or findings? Thanks.
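For reference, the color-id readback the question describes looks roughly like this in three.js terms. A sketch under assumptions: ids fit in 24 bits, antialiasing is off, and the renderer performs no sRGB output conversion (either would corrupt the encoded ids); all names are illustrative.

```ts
import * as THREE from 'three';

// Render each mesh in a flat color that encodes an integer id, then read
// the pixels back and collect the ids that survived the depth test.
function visibleObjectIds(
  renderer: THREE.WebGLRenderer,
  scene: THREE.Scene,
  camera: THREE.Camera,
): Set<number> {
  // Swap every material for an unlit color encoding the object's id.
  const originals = new Map<THREE.Mesh, THREE.Material | THREE.Material[]>();
  let nextId = 1; // 0 is reserved for the background
  scene.traverse((obj) => {
    const mesh = obj as THREE.Mesh;
    if (mesh.isMesh) {
      originals.set(mesh, mesh.material);
      mesh.material = new THREE.MeshBasicMaterial({ color: nextId++ });
    }
  });

  // A low-resolution target is usually enough for culling decisions.
  const size = 256;
  const target = new THREE.WebGLRenderTarget(size, size);
  renderer.setRenderTarget(target);
  renderer.render(scene, camera);

  const pixels = new Uint8Array(size * size * 4);
  renderer.readRenderTargetPixels(target, 0, 0, size, size, pixels);
  renderer.setRenderTarget(null);

  const ids = new Set<number>();
  for (let i = 0; i < pixels.length; i += 4) {
    // Reassemble the 24-bit id from the RGB channels.
    const id = (pixels[i] << 16) | (pixels[i + 1] << 8) | pixels[i + 2];
    if (id !== 0) ids.add(id);
  }

  // Restore the original materials.
  originals.forEach((mat, mesh) => { mesh.material = mat; });
  target.dispose();
  return ids;
}
```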
In games, occlusion is handled behind the scenes by one of two 3D libraries: DirectX or OpenGL. To get into specifics, occlusion is done using a Z-buffer. Each point has a Z component; points that are closer occlude points that are further away.
The occlusion algorithm is usually implemented in hardware by a dedicated 3D graphics chip that exposes DirectX or OpenGL functions. A game program using DirectX or OpenGL draws objects in 3D space and has the OpenGL/DirectX library render the scene, taking projection and occlusion into account.
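For concreteness, this per-pixel test is what you switch on in raw WebGL (which wraps OpenGL ES); the GPU does the rest:

```ts
// The Z-buffer test described above: the GPU keeps a depth value per
// pixel and rejects fragments farther away than what is already stored.
const canvas = document.createElement('canvas');
const gl = canvas.getContext('webgl')!;

gl.enable(gl.DEPTH_TEST);                            // per-pixel occlusion on
gl.depthFunc(gl.LESS);                               // keep the nearer fragment
gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT); // reset both buffers each frame
```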
It struck me that most of the answers so far only discuss image-order occlusion.
I'm not entirely sure about CAD, but in games occlusion starts at a much higher level, using BSP trees, octrees, and/or portal rendering to quickly determine which objects appear within the viewing frustum.
The term you should search on is hidden surface removal.
Realtime rendering usually takes advantage of one simple method of hidden surface removal: backface culling. Each polygon has a precalculated "surface normal" vector. By checking the angle between the surface normal and the direction to the camera, you know that a surface is facing away and therefore does not need to be rendered (see the sketch below).
Here are some interactive Flash-based demos and explanations.
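The test itself is a single dot product; a sketch with illustrative names:

```ts
import * as THREE from 'three';

// A triangle can be skipped when its surface normal points away from the
// vector running from the face toward the camera.
function isBackFacing(
  normal: THREE.Vector3,     // the polygon's surface normal
  faceCenter: THREE.Vector3, // any point on the polygon
  cameraPos: THREE.Vector3,
): boolean {
  const toCamera = cameraPos.clone().sub(faceCenter);
  return normal.dot(toCamera) <= 0; // <= 0 means facing away: cull it
}
```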
Hardware pixel Z-buffering is by far the simplest technique; however, in high-density scenes you can still end up shading the same pixel multiple times (overdraw), which may become a performance problem in some situations. You certainly need to make sure that you're not mapping and texturing thousands of objects that just aren't visible.
I'm currently thinking about this issue in one of my projects; I found that this paper stimulated a few ideas:
http://www.cs.tau.ac.il/~dcor/Graphics/adv-slides/short-image-based-culling.pdf