I've encountered a problem with the performance of loaded POD files. I'm using models that were originally created for an online WebGL service, so they are quite detailed. I have a large number of them and really want to avoid remaking them all. The more models I load into the scene, the lower the fps drops. Is there any general advice for improving performance without changing the models themselves? I've disabled multisampling and tried reducing texture sizes, the number of lights, and other things like that. Also, all models are in the camera's view, so I can't rely on culling. The models are all different, too. Any suggestions?
I knew there was something I had missed! :) In short: I created a project from the cocos3d template and then used my own mechanism to add POD files. But if you look into Scene.m, there are
[self createGLBuffers];
[self releaseRedundantData];
method calls in -initializeScene. And, of course, I was not calling them after adding my POD files. Doing so improved performance from 7 fps to 30.
Our company has been trying to use 3d.io with A-Frame as a solution.
Here is a floorplan sample in 3d.io:
https://spaces.archilogic.com/model/Kendra_Chen/5r29uqxy?modelResourceId=69db3a0d-e1fb-4fdf-b7a8-2ce4dfe30f85&mode=edit&view-menu=camera-bookmarks&main-menu=none&logo=false
I use 3d.io combined with A-Frame via the app creator that 3d.io provides here (change the scene id to "69db3a0d-e1fb-4fdf-b7a8-2ce4dfe30f85"):
https://appcreator.3d.io/default_setup?m=nv
I found that my computer's CPU usage rises to 70% and above.
I also serve it as a static file sample here:
https://www.istaging.com/dollhouse/artgallery/
My computer's CPU usage still rises to 70% and above.
Is this an A-Frame performance issue?
If so, how can I ask the A-Frame team to improve this?
We think it's a big issue that needs to be resolved.
Is it possible to improve this, or are there simply too many components to handle?
I have a problem with TensorFlow.
I need to create several models (e.g. neural networks), but once the parameters of those models have been computed, I create new models and no longer need the previous ones.
TensorFlow does not seem to recognize which models I am still using and which ones no longer have any reference, and I don't know how I am supposed to delete the previous models. As a result, memory usage keeps growing until the system kills my process, which is obviously something I would like to avoid.
How do you think I should deal with this problem? What is the correct way to 'delete' the previous models?
Thanks in advance,
Samuele
My React app has become incredibly laggy, and I'm trying to find (and destroy) the bottlenecks. The app updates every 10 seconds, and right now that update takes more than 100 ms, which is too long.
When I went to record a timeline with the Chrome dev tools, I found that something called "Mixin.perform" was taking 107 ms. Screenshot attached.
This part confused me. Normally, I'd aim to fix whatever appears to be taking the longest, but my app doesn't have any mixins, at least none that I know of. It's all written in ES6, so mixins aren't even possible.
I do use some third party components, so maybe it comes from one of those - is there any way I could tell which mixins are slowing things down? Or is there a different explanation?
The Mixin object is part of the React source code: https://github.com/facebook/react/blob/master/src/shared/utils/Transaction.js#L77
There is some description there of what it's for. I understand it to mean that it helps preserve state during reconciliation, the technique React uses to make rendering React applications performant enough for production use, and not just in theory.
You can read about reconciliation here:
https://facebook.github.io/react/docs/reconciliation.html
It's likely that many of your components are receiving prop changes, causing re-renders that cascade down to their children. At the end of this cycle, React does its work and calls the Mixin functions to help with reconciliation.
You can try adding some logging in componentWillReceiveProps or shouldComponentUpdate to compare nextProps with this.props. There may be cases where you want to return false from shouldComponentUpdate, which will reduce the amount of work React core has to do. You may also find that components are receiving new props many more times than you expect.
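As a rough illustration, here is a minimal sketch of that kind of logging and prop comparison. The Row component and its props are made up for the example, and it's written in TypeScript, but the same applies to plain ES6 classes:

import * as React from "react";

// Hypothetical props for one of the components that re-renders on every update cycle.
interface RowProps {
  label: string;
  value: number;
}

class Row extends React.Component<RowProps, {}> {
  // Log each incoming props change to see how often it actually happens.
  componentWillReceiveProps(nextProps: RowProps) {
    console.log("Row props changing:", this.props, "->", nextProps);
  }

  // Skip the re-render entirely when nothing this component uses has changed.
  shouldComponentUpdate(nextProps: RowProps) {
    return (
      nextProps.label !== this.props.label ||
      nextProps.value !== this.props.value
    );
  }

  render() {
    return <span>{this.props.label}: {this.props.value}</span>;
  }
}

If the log shows identical props arriving over and over, a shouldComponentUpdate like this (or React.PureComponent, if your React version has it) usually removes most of that reconciliation work.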
This article helps somewhat when trying to understand why components are updating when you think they should not be:
https://facebook.github.io/react/blog/2016/01/08/A-implies-B-does-not-imply-B-implies-A.html
Good luck!
I am developing a WebGIS application using Symfony with the MapFish plugin http://www.symfony-project.org/plugins/sfMapFishPlugin
I use the GeoJSON produced by MapFish to render layers through OpenLayers, in a vector layer of course.
When I show layers with up to 3k features, everything works fine. When I try layers with 10k features or more, the application crashes. I don't know the exact threshold, because my layers have either 2-3k features or 10-13k features.
I think the problem is related to Doctrine, because the last entry in the log is something like:
Sep 02 13:22:40 symfony [info] {Doctrine_Connection_Statement} execute :
followed by the query that fetches the geographic records.
As I said, I think the problem is the number of features, so I used OpenLayers.Strategy.BBox() to decrease the number of features to fetch and show. The result is the same: the app seems stuck while executing the query.
If I add a limit to the query used to get the features' GeoJSON, the application works. So I don't think this is related to the MapFish plugin, but rather to Doctrine.
Can anyone shed some light on this?
Thanks!
Even if it's theoretically possible, it's a bad idea to try to show that many vector features on a map.
You'd be better off changing the way features are displayed (e.g. raster tiles at low zoom levels, fetching a feature's details on click, ...); see the sketch below.
Even if your service answers in a reasonable time, your browser will be stuck, or at least will perform very poorly.
I'm the author of sfMapFishPlugin, and I have never tried to query that many features, let alone show them on an OL map.
Check out the OpenLayers FAQ on this subject: http://trac.osgeo.org/openlayers/wiki/FrequentlyAskedQuestions#WhatisthemaximumnumberofCoordinatesFeaturesIcandrawwithaVectorlayer , which is a bit outdated given recent browser improvements, but 10k vector features on a map is not reasonable.
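To make the "raster at low zoom, feature details on click" suggestion concrete, here is a rough sketch against the classic OpenLayers 2 API. The WMS URL and layer name are placeholders, and it assumes you can also publish the same data through a WMS server, which may or may not fit your MapFish setup:

// OpenLayers 2 is loaded globally from a script tag, so declare it for TypeScript.
declare const OpenLayers: any;

const map = new OpenLayers.Map("map");

// Let the server render the 10k+ features as raster tiles instead of sending GeoJSON.
const featureLayer = new OpenLayers.Layer.WMS(
  "Features (raster)",
  "http://example.com/wms",                            // placeholder WMS endpoint
  { layers: "myproject:features", transparent: true }  // placeholder layer name
);
map.addLayer(featureLayer);

// Only query attribute data for the point the user actually clicks.
const info = new OpenLayers.Control.WMSGetFeatureInfo({
  url: "http://example.com/wms",                       // placeholder WMS endpoint
  layers: [featureLayer],
  infoFormat: "application/vnd.ogc.gml"                // so the response is parsed into features
});
info.events.register("getfeatureinfo", null, (event: any) => {
  // event.features holds the features returned for the clicked location.
  console.log("Clicked features:", event.features);
});
map.addControl(info);
info.activate();

That way the vector work the browser has to do stays roughly constant no matter how many features the layer contains, which is the limit the FAQ entry above is really about.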
HTH,
I'm currently using CakePHP for my training plan application, but I'm seeing rendering times of 800 ms per page.
Do you have any tips on improving the performance?
Thanks in advance.
br, kms
In general, the tool that will give you the greatest speed boost is caching, and CakePHP has a dedicated helper for it. The second most useful thing is optimizing your queries so that you only ask for the data you need; this can be done with the Containable behavior.
You can find more detailed tips in this post.
Install APC if you don't have it; that will instantly bring it under 500 ms, without touching a single line of code.
Make sure your tables have all the proper indexes so that queries are as fast as they can be.
Next, look at caching routes/URLs, as that is a huge drain.
Those steps will give you the biggest speed boost for the least amount of work.
Have you tried any of the CSS/JS asset combiners for CakePHP? They combine/compress/minify your CSS/JS scripts and cache them where applicable. Here's one that's quite recent.
Not specific to CakePHP, but you could go through all the factors in Google Page Speed; it will help you speed up your page load times by suggesting which scripts you could combine and advising on how to reduce requests.
Other than that, look into the Containable behaviour and see if you can cut out any unnecessary data/queries by selecting only what you need at any given time.
This question is well-populated with information on speeding Cake up.