Autodesk Forge custom geometry - three.js

I am currently facing the problem that rendering custom geometry into my Forge viewer is allocating a huge amount of memory. I am using the technique suggested on the Autodesk website: https://forge.autodesk.com/en/docs/viewer/v7/developers_guide/advanced_options/custom-geometry/.
I need to render up to 300 custom geometries into my viewer, but when I try to do so the site simply crashes and my memory usage jumps over 5 GB. Is there any good way to render a large amount of custom geometry in Forge while keeping performance at a useful level?
Thx, JT

I'm afraid this is not something Forge Viewer will be able to help with. Here's why:
The viewer contains many interesting optimizations for efficient loading and rendering of a single complex model. For example, it builds a special BVH (bounding volume hierarchy) of all the geometries in a model so that they can be traversed and rendered efficiently. The viewer can also handle very complex models by moving their geometry data in and out of the GPU to make room for others. But again, all this assumes that there's just one (or only a few) models in the scene. If you try to add hundreds of models instead, many of these optimizations can no longer be applied, and in some cases they can even make things worse (imagine that the viewer suddenly has to traverse 300 BVHs instead of a single one).
So my recommendation would be: try to avoid situations where you would have hundreds of separate models in the scene. If possible, consider "consolidating" them into a single model. For example, if the 300 models were Inventor assemblies that you need to place at specific positions, you could:
- aggregate all the assemblies using Design Automation for Inventor, and convert the result into a single Forge model, or
- create a single Forge model with all 300 geometries, and then move them around at runtime using the Viewer APIs (see the sketch after this list)
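To illustrate the second option, here is a minimal, hedged sketch of repositioning one component of a single loaded model at runtime with the Viewer APIs. It assumes a Forge Viewer v7 instance named `viewer` with the consolidated model already loaded; `dbIdToMove` is a hypothetical ID of the component you want to move.

```javascript
// Sketch only: reposition one component of a single loaded Forge model at runtime.
// THREE is bundled with the viewer; `dbIdToMove` is a placeholder.
const model = viewer.model;
const tree = model.getInstanceTree();
const fragList = model.getFragmentList();

tree.enumNodeFragments(dbIdToMove, (fragId) => {
    // Apply an extra transform on top of the fragment's original placement.
    fragList.updateAnimTransform(
        fragId,
        new THREE.Vector3(1, 1, 1),       // scale
        new THREE.Quaternion(0, 0, 0, 1), // rotation
        new THREE.Vector3(10, 0, 0)       // translation in model units
    );
}, true);

viewer.impl.invalidate(true, true, true); // ask the viewer to re-render
```

Because the extra transform is layered on top of each fragment's original placement, the source model itself never has to change.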
If none of these options work for you, you could also take a look at the new format called SVF2 (https://forge.autodesk.com/blog/svf2-public-beta-new-optimized-viewer-format) which significantly reduces the memory footprint.

Related

THREE JS Imported Model Size and Performance

I was hoping to display in my app as many as 3,000 instances of the same model at the same time - but it’s really slowing down my computer. It's just too much.
I know InstancedMesh is the way to go for something like this so I’ve been following THREE.js’s examples here: https://threejs.org/docs/#api/en/objects/InstancedMesh
The examples are fantastic, but they seem to use really small models which makes it really hard to get a good feel for what the model size-limits should be.
For example:
-The Spheres used here aren't imported custom 3D models, they're just instances of IcosahedronGeometry
-The Flower.glb model used in this example is tiny: it only has 218 Vertices on it.
-And the “Flying Monkeys” here come from a ".json" file so I can’t tell how many vertices that model has.
My model by comparison has 4,832 Vertices - which by the way, was listed in the "low-poly" category where I found it, so it's not really considered particularly big.
In terms of file-size, it's exactly 222kb.
I tried scaling it down in Blender and re-exporting it - still came out at 222kb.
Obviously I can take some “drastic measures”, like:
-Try to re-design my 3D model and make it smaller - but that would greatly reduce its beauty and the overall aesthetics of the project
-I can re-imagine or re-architect the project to display maybe 1,000 models at the same time instead of 3,000
etc.
But being that I’m new to THREE.js - and 3D modeling in general, I just wanted to first ask the community if there are any suggestions or tricks to try out first before making such radical changes.
-The model I'm importing is in the .glTF format - is that the best format to use or should I try something else?
-All the meshes in it come into the browser as instances of BufferGeometry which I believe is the lightest in terms of memory demands - is that correct?
Are there any other things I need to be aware of to optimize performance?
Some setting in Blender or other 3D modeling software that can reduce model-size?
Some general rules of thumb to follow when embarking on something like this?
Would really appreciate any and all help.
Thanks!
glTF is fine for transmitting geometry and materials; I'd say it's the standard right now. If there's only geometry, I'd consider the OBJ or PLY formats.
The model size is a blocker only for the initial load if we employ instancing of its geometry and material. After that, we simply reuse the geometry and material that have already been created.
At the GPU level, instancing means drawing a single mesh with a single material shader, many times. You can override certain inputs to the material for each instance, but it sort of has to be a single material.
— Don McCurdy
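To make that concrete, here is a minimal sketch of instancing an imported glTF mesh with THREE.InstancedMesh. It assumes the glb contains a single mesh; the file name, instance count, and random placement are placeholders, and an existing `scene` is assumed.

```javascript
// Sketch: render many copies of one imported glTF mesh with InstancedMesh.
import * as THREE from 'three';
import { GLTFLoader } from 'three/examples/jsm/loaders/GLTFLoader.js';

const COUNT = 3000; // placeholder instance count
const loader = new GLTFLoader();

loader.load('model.glb', (gltf) => {
  const source = gltf.scene.getObjectByProperty('isMesh', true); // first mesh in the file
  const instanced = new THREE.InstancedMesh(source.geometry, source.material, COUNT);

  const dummy = new THREE.Object3D();
  for (let i = 0; i < COUNT; i++) {
    dummy.position.set(Math.random() * 100, 0, Math.random() * 100); // illustrative placement
    dummy.updateMatrix();
    instanced.setMatrixAt(i, dummy.matrix); // one 4x4 matrix per instance
  }
  instanced.instanceMatrix.needsUpdate = true;

  scene.add(instanced); // assumes an existing `scene`
});
```

The geometry and material are uploaded once; only the per-instance matrices vary, which is why the model's file size mainly affects the initial load.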
The biggest concern here is the number of triangles (faces) rendered: the fewer triangles, and the fewer models drawn at a time, the better the performance. To help with this, you can use some degree of LOD (level of detail) to progressively decrease a model's detail with distance, until you stop rendering it altogether.
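A minimal sketch of three.js's built-in THREE.LOD, assuming you have already prepared high-, mid-, and low-detail versions of the mesh and that a `scene` exists (the names and distances are illustrative):

```javascript
// Sketch: swap between pre-built detail levels based on camera distance.
const lod = new THREE.LOD();
lod.addLevel(highDetailMesh, 0);          // used when the camera is close
lod.addLevel(midDetailMesh, 50);
lod.addLevel(lowDetailMesh, 150);
lod.addLevel(new THREE.Object3D(), 400);  // draw nothing beyond this distance
scene.add(lod);

// Recent three.js versions update the LOD selection automatically while
// rendering; older ones may need lod.update(camera) called every frame.
```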
Some examples/resources to get you started:
LOD
Instancing Models
Modifying Instances

Is the JSON model format better for THREE.js?

I am using Three.js to create a simple game. I load about 100 low-poly models in OBJ format, but the performance is not smooth; all the models together are no more than 18 MB. If I use the JSON format, will it be faster even though the size will be more than double?
I tried Collada, but for simple objects like mine OBJ is faster. If JSON is not the best solution, what is?
No single file format is better overall; it depends on your needs and requirements, the external software you use, and whether the model contains animation. Personally I don't use JSON much, I use OBJ, but JSON is heavily supported by three.js. That's more of an opinion, though.
There are many factors as to why your application can be heavy.
Without the source code or the model files themselves I can only speculate.
A few things to consider:
Are your models optimized as well as they can be? 100 models in one scene at 18 MB is quite a lot at one time; does that include textures?
Are textures compressed and reused? This will increase performance (see the sketch after this list).
Shadows, lighting, and animation types all have an impact; Google has plenty of resources to offer on each.
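As a minimal illustration of texture reuse (the file name and materials are placeholders, and THREE is assumed to be imported), something like this loads a texture once and shares it everywhere it is needed:

```javascript
// Sketch: load one texture and share it across materials instead of loading copies.
const sharedTexture = new THREE.TextureLoader().load('brick.jpg'); // placeholder path

const wallMaterial  = new THREE.MeshStandardMaterial({ map: sharedTexture });
const floorMaterial = new THREE.MeshStandardMaterial({ map: sharedTexture });
// Both materials reference the same texture object, so it is uploaded to the GPU only once.
```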
There are several techniques to keep your poly count down; subdivision surfaces are a good example, and there is a really useful article on the topic:
http://www.kadrmasconcepts.com/blog/2011/11/06/subdivision-surfaces-with-three-js/
Also consider LOD (level of detail), which renders more or less detail depending on how far or near an object is.
A great useful explanation here:
http://www.pheelicks.com/2014/03/rendering-large-terrains/
Three.js supports this without any added libraries.
Detail, and how you render it, is the key to best performance.
Even how you have set up your project can have a major influence. Take a look at your functions and how you use them; for example, mousemove and DOM click handlers can slow your three.js app dramatically if they are not optimized and used efficiently.
Reuse and share is your best option; there is no point in loading the same model twice just because one is blue and the other is green (a minimal sketch of this follows below).
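A hedged sketch of that idea, assuming the OBJ contains a single mesh and that crate.obj, the colors, and the `scene` are placeholders:

```javascript
// Sketch: load an OBJ once, then reuse its geometry for every copy,
// varying only the material.
import * as THREE from 'three';
import { OBJLoader } from 'three/examples/jsm/loaders/OBJLoader.js';

new OBJLoader().load('crate.obj', (obj) => {
  const baseGeometry = obj.children[0].geometry; // shared by every copy

  const blueCrate  = new THREE.Mesh(baseGeometry, new THREE.MeshStandardMaterial({ color: 0x2244ff }));
  const greenCrate = new THREE.Mesh(baseGeometry, new THREE.MeshStandardMaterial({ color: 0x22ff44 }));
  greenCrate.position.x = 5;

  scene.add(blueCrate, greenCrate); // assumes an existing `scene`
});
```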
I don't think there is a single better format; it really depends on what you need and what you don't. For me, though, I would go with OBJ!

ThreeJS: is it possible to simplify an object / reduce the number of vertices?

I'm starting to learn ThreeJS. I have some very complex models to display.
These models come from Autocad files that my customer provides.
But sometimes the amount of details in the model is just way too much for the purpose of the website.
I would like to reduce the number of vertices in the model to simplify the display and enhance performance.
Is this possible from within ThreeJS? Or is there maybe an other solution for this?
There's a modifier called SimplifyModifier that works very well. You'll find it in the Three.js examples
https://threejs.org/examples/#webgl_modifier_simplifier
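A minimal sketch of how the modifier is typically used, assuming an existing `mesh` and that removing roughly half the vertices is acceptable (the ratio is illustrative):

```javascript
// Sketch: reduce a mesh's vertex count with the SimplifyModifier from the
// three.js examples folder.
import { SimplifyModifier } from 'three/examples/jsm/modifiers/SimplifyModifier.js';

const modifier = new SimplifyModifier();

// The second argument is the number of vertices to REMOVE; here roughly half.
const removeCount = Math.floor(mesh.geometry.attributes.position.count * 0.5);
mesh.geometry = modifier.modify(mesh.geometry, removeCount);

// Note: the stock modifier may discard attributes such as UVs, which is why
// the answer below mentions an updated version for textured models.
```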
If you can import the model into Blender, you could try Decimate Modifier. In the latest version of Blender, it features three different methods with configurable "amount" parameters. Depending on your mesh topology, it might reduce the poly count drastically with virtually no visual changes, or it might completely break the model with even a slight reduction attempt. Other 3d packages should include a similar functionality, but I've never used those.
Another thing that came into mind: Sometimes when I've encountered a too high-poly Blender model, a good start has been checking if it has a Subdivision Modifier applied and removing that if so. While I do not know if there's something similar in Autocad, it might be worth investigating.
I updated the SimplifyModifier function so that it works with textured models. Here is an example:
https://rigmodels.com/3d_LOD.php?view=5OOOEWDBCBR630L4R438TGJCD
You can extract the JS code and use it in your project.

Lightweight 3D animation driven by external data

I'm a structural engineering master's student working on a seismic evaluation of a temple structure in Portugal. For the evaluation, I have created a 3D block model of the structure and will use a discrete element code to analyze the behaviour of the structure under a variety of seismic (earthquake) records. The software that I will use for the analysis can produce snapshots of the structure at regular intervals, which can then be put together to make a movie of the response. However, producing the images slows down the analysis. Furthermore, since the pictures are 2D images from a specified angle, there is no way to rotate and view the response from other angles without re-running the model (a process that currently takes 3 days of computer time).
I am looking for an alternative method for creating a movie of the response of the structure. What I want is a very lightweight solution, where I can just bring in the block model I have and then produce the animation on the fly by feeding in the location and the three principal axes of each block at regular intervals. The blocks are described as prisms, with the top and bottom planes defining all of the vertices. Since the model is produced as text files, I can modify the output so that it can be read and understood by the animation code. The model is composed of about 180 blocks with 24 vertices per block (so 4320 vertices). The location and the three unit vectors describing the block axes are produced by the program, and I can write them out in whatever way I want.
The main issue is that the quality of the animation should be decent. If the system is vector based and allows for scaling, that would be great. I would like to be able to rotate the model in real time with simple mouse dragging without too much lag or other issues.
I have very limited time (in fact I am already very behind). That is why I wanted to ask the experts here so that I don't waste my time on something that will not work in the end. I have been using Rhino and Grasshopper to generate my model but I don't think it is the right tool for this purpose. I was thinking that Processing might be able to handle this but I don't have any experience with it. Another thing that I would like to be able to do is to maybe have a 3D PDF file for distribution. But I'm not sure if this can be done with 3D PDF.
Any insight or guidance is greatly appreciated.
Don't let the name fool you, but BluffTitler DX9, a commercial package, may be what you're looking for.
Its simple interface provides a fast learning curve, with many quick tutorials to either watch or dissect. Depending on how fast your GPU is, real-time previews are scalable.
Reference:
Model Layer Page
User Submitted Gallery (3D models)
Jim Merry from tetra4D here. We make the 3D CAD conversion tools for Acrobat X to generate 3D PDFs. Acrobat has a 3D JavaScript API that enables you to manipulate objects, i.e., you could drive translations, rotations, etc. of objects from your animation information after translating your model to 3D PDF. Not sure I would recommend this approach if you are in a hurry, however. Also, I don't think there are any commercial 3D PDF generation tools for the formats you are using (Rhino, Grasshopper, Processing).
If you are trying to animate geometric deformations, 3D PDF won't really help you at all. You could capture the animation, encode it as Flash video, and embed it in a PDF, but this is a function of the multimedia tool in Acrobat Pro, i.e., it is not specific to 3D.

Help with Cocoa: Objects as views?

In my app I want to have a light table to sort photos. Basically it's just a huge view with lots of photos in it, and you can drag the photos around. Photos can overlap; they don't fall into a grid like in iPhoto.
So every photo needs to respond to mouse events. Do I make every photo into its own view? Or are views too expensive to create? I want to easily support 100+ photos or more.
Photos need to be in layers as well so I can change the stacking order. Do I use CoreAnimation for this?
I don't need finished source code just some pointers and general ideas. I will (try to) figure out the implementation myself.
Fwiw, I target 10.5+, I use Obj-C 2.0 and garbage collection.
Thanks in advance!
You should definitely use CALayer objects. Using a set of NSImageView subviews will very quickly become unmanageable performance-wise, especially if you have more than 100 images on screen. If you don't want to use Core Animation for some reason, you'd be much better off creating a single custom view and handling all the image drawing and hit testing yourself. This will be more efficient than instantiating many NSImageView objects.
However, Core Animation layers will give orders of magnitude improvement in performance over this approach, as each layer is buffered in the GPU so you can drag the layers around with virtually zero cost, and you only need to draw each image once rather than every time anything in the view changes. Core Animation will also handle layer stacking for you.
Have a look at the excellent CocoaSlides sample code which demonstrates a very similar application to what you describe, including hit testing and simple animation.
The simplest method is to use NSImageViews. You can create a subclass that can be easily dragged, scaled, and rotated. A more complex but visually superior option would be to use Core Animation layers (CALayer).
As long as you maintain the photo representations as distinct objects (so you can manipulate them individually), they will use quite a chunk of memory no matter how you represent them. If you keep all the data available in the photos, each one could take several megabytes. You will probably want to reduce each image's display quality (size in pixels, fidelity, etc.) except when that particular photo is being worked on in detail.
Remember, you don't have to treat the photos like the physical objects they mimic. You simply have to create the illusion of physical objects in the interface. We're theater stage designers, not architects. As long as your data model remains rigorous to the task at hand, the interface can engage in all kinds of illusions for the benefit of the user.
