I'm developing a game with Three.js that loads OBJ models from the server, driven by a Server-Sent Events stream. All the models are very simple and very low poly. The code runs well, but with about 50 models loaded the EventSource makes everything very, very slow. I moved the EventSource code into a web worker, which improved performance, but it is still not acceptable at all.
What should I do to improve performance?
I would suggest taking a look at the JSON 3D exporter project. It converts 3D objects into plain JSON files, which are really lightweight and very useful for improving response times. To use it, you will need Blender.
https://github.com/mrdoob/three.js/tree/master/utils/exporters/blender
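A rough sketch of loading a file produced by that exporter, with the model URL as a placeholder (this uses the legacy THREE.JSONLoader that shipped alongside the Blender exporter):

```javascript
// Legacy geometry loader for the Blender JSON exporter output.
var loader = new THREE.JSONLoader();

loader.load('models/mymodel.json', function (geometry, materials) {
  // In the releases of that era, materials arrive as an array.
  var mesh = new THREE.Mesh(geometry, new THREE.MeshFaceMaterial(materials));
  scene.add(mesh); // 'scene' is your existing THREE.Scene
});
```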
Related
I've seen 3D viewer services like SketchFab, Pix4D, DroneDeploy, etc. serve large 3D models over a network with really quick rendering times. For example, when I download a model from Pix4D directly, the file is roughly 70 MB, which takes a considerable amount of time to transfer over a network.
However, when I visit sites like SketchFab, Pix4D, etc., the models only take a few seconds to appear. It looks like they are optimizing these files somehow without losing any of the vertices (accuracy) of the models.
Any thoughts on how to serve large assets in 3D applications?
They don't download those formats. They download custom formats designed for performance and download speed.
For example:
https://github.com/google/draco/
I haven't looked into the details, but glTF also claims to be designed as a display format (a format optimized for real-time display), whereas formats like .obj, .dae, .fbx, .3ds, .mb, etc. are all either editing formats or interchange formats for moving data between editors, not formats for display.
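If you go the glTF + Draco route, loading in three.js looks roughly like this; the decoder path and model URL are placeholders, and the loader modules live under examples/ in the three.js repository:

```javascript
import { GLTFLoader } from 'three/examples/jsm/loaders/GLTFLoader.js';
import { DRACOLoader } from 'three/examples/jsm/loaders/DRACOLoader.js';

// Point the Draco loader at a folder containing the decoder binaries.
const dracoLoader = new DRACOLoader();
dracoLoader.setDecoderPath('/draco/');

const gltfLoader = new GLTFLoader();
gltfLoader.setDRACOLoader(dracoLoader);

// Load a Draco-compressed glTF binary and add it to an existing scene.
gltfLoader.load('models/asset.glb', function (gltf) {
  scene.add(gltf.scene); // 'scene' is your existing THREE.Scene
});
```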
Question 1:
In my application I use JSONLoader to load a model of about 65 MB in .js format, and it takes 10 seconds. That's too long for us. Is there any way to load big models faster, or a better loader or format?
Question 2:
Also about three.js: in my case I used the remove() function to remove a model from the scene, but the browser doesn't release the memory immediately; it takes more than 20 seconds to be freed. What can I do? This is my code: this.scene.remove(i); where i is my model.
JSON is a very heavy format. Try using OBJ or glTF. THREE.js has loaders for each of them in its examples.
Regarding memory release, this is inherent to JavaScript, which uses garbage collection to release memory. (Here's an MDN article on JavaScript memory management.) Just like Java, this happens "once in a while," so you just have to wait for it to happen.
Q1: OpenCTM is also a good compression format, but it is limited to single triangle meshes, so you can't store whole scenes with it. It also offers a lossy compression mode that yields a very high compression ratio. There are three.js examples as well: https://threejs.org/examples/#webgl_loader_ctm
Q2:
If you want to really remove your model, you need to call dispose() on the geometry object to release it from memory (materials and textures also have a dispose() method).
The docs say (https://threejs.org/docs/index.html#api/core/BufferGeometry):
.dispose()
Disposes the object from memory.
You need to call this when you want the BufferGeometry removed while the application is running.
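Putting it together, a minimal cleanup sketch (here 'mesh' stands in for the model you removed):

```javascript
// Detach the model from the scene graph first.
scene.remove(mesh);

// Then free the GPU-side resources it was holding.
mesh.geometry.dispose();

// A mesh can carry a single material or an array of them.
var materials = Array.isArray(mesh.material) ? mesh.material : [mesh.material];
materials.forEach(function (material) {
  if (material.map) material.map.dispose(); // dispose the texture, if any
  material.dispose();
});
```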
I'm generating a form from JSON data that I load via $http.get(), and I use a bunch of custom/third-party directives (ui-select, Bootstrap UI, ...) to get the desired end result. Just to make things more interesting, the forms are nested, and with ng-repeat things still feel pretty sluggish, especially on mobile. The form is quite lengthy and I've split it into several sections, so using ng-if to display one section at a time, as well as using bindonce, improves performance a bit, but not to the extent that I find acceptable from a UX point of view.
The catch-22 is that the underlying JSON data is unlikely to change, so ideally I'd like to stick with the slow version in development, but in production I'd like to build/compile the form so it loads faster.
I know that third-party libraries (namely ui-select) introduce a bottleneck, but apart from using $templateCache with $compile in the app.run() section, or rendering the form with a templating engine such as EJS, what other tweaks should I consider to improve performance?
You can also cache the JSON data using IndexedDB or Local Storage, which can make the form load faster.
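As a rough AngularJS sketch of that idea, assuming a hypothetical /api/form-definition endpoint and storage key:

```javascript
app.factory('formDefinition', function ($http, $q) {
  var CACHE_KEY = 'formDefinition'; // hypothetical storage key

  return function load() {
    var cached = localStorage.getItem(CACHE_KEY);
    if (cached) {
      // Serve the cached JSON immediately and skip the network round trip.
      return $q.when(JSON.parse(cached));
    }
    return $http.get('/api/form-definition').then(function (response) {
      localStorage.setItem(CACHE_KEY, JSON.stringify(response.data));
      return response.data;
    });
  };
});
```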
My app shows a map where locations (markers) are dynamically loaded via an AJAX (and database) request after every change of the map bounds.
I'm convinced that this solution is not scalable: at the moment, the Europe area shows a total of 10 markers.
If the database grows and I display, for instance, 1000 locations, that means 1000 rows would be returned to the user.
This is not a JS / UI issue, since I use the MarkerCluster plugin and I avoid redrawing markers for already-loaded locations.
I made some tweaks:
- Delay the AJAX request using the gmaps idle event (sketched below)
- Increase the minimal zoom level, so the entire world can't be displayed
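For reference, the idle-event delay looks roughly like this (the debounce interval and loadMarkersForBounds() are placeholders for my own code):

```javascript
// Re-query markers only once the map has settled, not on every bounds change.
var reloadTimer = null;

google.maps.event.addListener(map, 'idle', function () {
  clearTimeout(reloadTimer);
  reloadTimer = setTimeout(function () {
    loadMarkersForBounds(map.getBounds()); // existing AJAX call
  }, 300);
});
```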
But this is not enough.
There are lots of ways to approach this but I will just put here the two I think are most appropriate from your question.
The first is to really control, from your web app, what information is asked for and when. You could write all of this yourself in JavaScript and implement caching techniques, etc. There are a number of libraries out there that do most of this work for you, though; a minimal bounds-based loading sketch follows the list below.
I would recommend one of the following:
OpenGeo SDK
OpenLayers
GeoExt
Leaflet
All of these have ways of controlling local caching, deciding when to fetch data, and what data is requested from the server. Most of them can also be extended to add any functionality that is missing. The top two, as far as I know, also support Google Maps (as well as a number of other base layers).
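To illustrate the "only ask for what the current view needs" part, a small Leaflet sketch (the /markers endpoint and its query parameters are assumptions):

```javascript
// Fetch only the features inside the current viewport whenever panning stops.
var markersLayer = L.layerGroup().addTo(map);

map.on('moveend', function () {
  var b = map.getBounds();
  var url = '/markers?west=' + b.getWest() + '&south=' + b.getSouth() +
            '&east=' + b.getEast() + '&north=' + b.getNorth();

  fetch(url)
    .then(function (res) { return res.json(); })
    .then(function (geojson) {
      markersLayer.clearLayers();
      L.geoJSON(geojson).addTo(markersLayer); // L.geoJson in older Leaflet
    });
});
```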
If you need to add even more control over your data locally you could even look at implementing something like PouchDB. I think this is more suited to mobile applications or instances where the network connection is either really slow or intermittent.
This sort of solution should be able to easily handle thousands to tens of thousands of features with hundreds of users.
If you really are going to scale up to hundreds of thousands or millions of features with hundreds to thousands of users, then I would suggest adding a tile server to the solution above. The tile server sits between your web application and your database. Most of them have lots of caching settings and optimisations for dealing with large datasets and pushing them out to a client. Because they push out tiles rather than features, the data output remains reasonably constant even as the number of features grows. The OpenGeo SDK and OpenLayers libraries I mentioned above work really well with any of the following tile servers (see the sketch after this list):
GeoServer
Mapserver
MapGuide
Quantum GIS Server
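To give a flavour of the tile-server setup, here is an OpenLayers 3+ sketch pointing at a GeoServer WMS endpoint; the URL and layer name are placeholders:

```javascript
// The browser only ever pulls rendered tiles from GeoServer,
// so payload size stays roughly constant as the dataset grows.
var featureTiles = new ol.layer.Tile({
  source: new ol.source.TileWMS({
    url: 'https://example.com/geoserver/wms',
    params: { LAYERS: 'myworkspace:locations', TILED: true }
  })
});

var olMap = new ol.Map({
  target: 'map',
  layers: [new ol.layer.Tile({ source: new ol.source.OSM() }), featureTiles],
  view: new ol.View({ center: [0, 0], zoom: 4 })
});
```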
If you are reluctant to do any coding, there are some offerings that work out of the box for enterprise environments. They are all expensive, and from your question I think they are probably not what you are looking for.
Is it possible to use TypedArrays directly in three.js for custom attributes? I'm downloading a binary model format from a server, and the data is stored directly into a Float32Array. Since this is the format required by gl.bufferData, it seems wasteful to create THREE.Vector3 objects, which only get stored back into a new Float32Array inside WebGLRenderer.js.
As a possibly unrelated issue/bug, I've profiled this binary model loading in Chrome and noticed that 60% of the time is spent in the garbage collector. This is seriously bogging down the model loading, since there are over 100k vertices in this model. This only started happening since r49, I believe. Any insight?
You can use BufferGeometry. Sadly, we don't have many examples of how to use it yet; only CTMLoader uses it at this point. Maybe it can serve as a good reference for you?
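A rough sketch of feeding a downloaded Float32Array straight into BufferGeometry (recent three.js releases use setAttribute; older ones used addAttribute, and the very early versions populated geometry.attributes directly):

```javascript
// 'positions' is the Float32Array parsed from the binary download,
// laid out as x, y, z triplets.
var geometry = new THREE.BufferGeometry();
geometry.setAttribute('position', new THREE.BufferAttribute(positions, 3));
geometry.computeVertexNormals(); // optional, if the file carries no normals

var mesh = new THREE.Mesh(geometry, new THREE.MeshStandardMaterial());
scene.add(mesh);
```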