I am creating a three.js-powered website, and on some browsers (I am looking at you, Firefox), if other tabs are also running WebGL, my performance stutters.
I would like to know if there is a way to find out (in the browser) whether other tabs are using WebGL, so that I can alert the user.
I appreciate any and all comments!
That would be a security violation, so no, you can't do that.
Update:
I'll just add that you could include stats.js (more likely, you'll need to do something very similar to what stats.js does yourself) to get an idea of the average frame rate, watch for dips in it, and alert the user if it drops regularly. That said, you are likely to get the calculation wrong, and there are always many variables you can't control that affect performance. Most of those can be resolved with a browser restart; in particular, Firefox doesn't seem to release its GPU memory across page reloads, and when that memory is full, performance drops badly.
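A minimal sketch of that idea, purely for illustration (the threshold, sampling window and warning callback are placeholders, not anything from the question):

```js
// Rough sketch of a stats.js-style check: sample frames per second with
// requestAnimationFrame and call a warning callback (hypothetical) when the
// rate stays below a threshold for several consecutive seconds.
function monitorFrameRate(onSustainedDrop) {
  var MIN_FPS = 30;          // placeholder threshold
  var BAD_SECONDS = 5;       // how long a drop must last before warning
  var frames = 0;
  var badStreak = 0;
  var lastSample = performance.now();

  function tick(now) {
    frames++;
    if (now - lastSample >= 1000) {                  // sample once per second
      var fps = frames * 1000 / (now - lastSample);
      badStreak = fps < MIN_FPS ? badStreak + 1 : 0;
      if (badStreak >= BAD_SECONDS) {
        onSustainedDrop(fps);                        // e.g. show a non-intrusive notice
        badStreak = 0;
      }
      frames = 0;
      lastSample = now;
    }
    requestAnimationFrame(tick);
  }
  requestAnimationFrame(tick);
}

// Usage:
// monitorFrameRate(function (fps) { console.warn('Low frame rate:', fps.toFixed(1)); });
```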
In any case, any warnings you give out should probably not be intrusive for the user in any way.
Also note that properly written WebGL programs (ones using requestAnimationFrame as intended) should, to my knowledge, not consume much compute while their tab is in the background, though this may vary per browser. A background tab will still hold on to memory, though (GPU memory, plus JavaScript code and objects).
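If you want to be explicit about it rather than rely on requestAnimationFrame throttling alone, the Page Visibility API lets you pause your own work when the tab is hidden; a small sketch where pauseRendering/resumeRendering stand in for whatever your render loop exposes:

```js
// Sketch: explicitly pause/resume your own work when the tab is hidden or
// shown, via the Page Visibility API. pauseRendering/resumeRendering are
// placeholders for your application's own functions.
document.addEventListener('visibilitychange', function () {
  if (document.hidden) {
    pauseRendering();    // stop scheduling frames, suspend timers, etc.
  } else {
    resumeRendering();   // restart the requestAnimationFrame loop
  }
});
```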
I am looking for some advice on how to track application performance; the application is developed using ReactJS, and I am building it with webpack.
First of all I will just present what I have done and what the application is expected to do:
I need to render a lot of, let's just call them widgets, that update in real time and present a lot of data. Each widget renders roughly 50 to 80 values, and updates may arrive from the server all at once, so they should be reflected immediately when the data is received. Consider that I might have around 25 to 30 widgets that need to update in real time.
Let me tell you a little bit about the implementation:
I have used the smart/dumb pattern for ReactJS components
The actual data is stored in application state and is distributed by the smart components to dumb components through props
I am using Pure Render Mixin to avoid unnecessary rendering
I am also using Immutable data so that Pure Render Mixin works as intended, that is, it is accurate in determining whether a render is necessary while at the same time being very fast.
There are no weird callback bindings that might trigger re-rendering of components; this has already been double-checked. (A simplified sketch of this setup follows this list.)
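To make the above concrete, here is a simplified sketch of the smart/dumb split; the component, store and prop names are made up for illustration and the data-fetching part is omitted:

```js
var React = require('react');
var PureRenderMixin = require('react-addons-pure-render-mixin');
var Immutable = require('immutable');

// Dumb component: renders whatever values it receives via props.
var WidgetValues = React.createClass({
  mixins: [PureRenderMixin],   // shallow prop comparison; cheap with Immutable references
  propTypes: {
    values: React.PropTypes.instanceOf(Immutable.List).isRequired
  },
  render: function () {
    return (
      <ul>
        {this.props.values.map(function (value, index) {
          return <li key={index}>{value}</li>;
        }).toArray()}
      </ul>
    );
  }
});

// Smart component: holds the Immutable data in state and passes it down.
var WidgetContainer = React.createClass({
  getInitialState: function () {
    return { values: Immutable.List() };
  },
  componentDidMount: function () {
    // subscribe to the server push here and call this.setState({ values: newList })
  },
  render: function () {
    return <WidgetValues values={this.state.values} />;
  }
});
```

The point being that the dumb component only receives an Immutable.List reference, so the mixin's shallow comparison is enough to skip renders when nothing has changed.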
Now the issues I am having:
with about 5-6 widgets, meaning around 400-500 values that need to render each second, it works very well in Chrome and decently in Firefox.
with about 25-30 widgets, the application still works decently in Chrome, but it becomes slow in Firefox; by slow I mean user interaction can be delayed by around 1 second. That is really unacceptable.
What I have tried:
used Chrome dev tools to measure performance; that didn't help much. What I could see, though, is that everything looks alright, and there is no way I could read all the graphs this tool provides (and I've read a lot of articles about it).
tried to use Firebug in Firefox. That's an amazing tool, but not in this case: just opening it with the above-mentioned load (30 widgets) makes Firefox freeze, and the profiler gave me nothing.
as a last resort, I used the default dev tools in Firefox, which have a performance tab. That gave me some information about which parts of the application put the most load on the browser. It seemed to be some heavy computation determining the min/max of an Immutable.List.
Unfortunately the application still has performance issues, it is of high importance to get it working perfectly, and the Firefox profiler doesn't give me any other leads.
So my questions would be:
what would be the next step to take in order to pin down the performance issues (as precisely as possible: which class/method, or at least which file)?
did you guys use any performance testing tool that gives you insight into what the hell is happening?
is there something else to consider to improve overall performance, especially when targeting multiple browsers (Firefox, Chrome, IE11)?
So I am a backend developer who is dipping my toes into developing a responsive site that, for SEO reasons, needs to be able to show up to 1,000 responsive "containers" for search results, i.e...
[1]: http://107.6.139.93/Melbourne.homes
So what seems to be happening is that the browser locks up trying to render all these containers, or something like that. For searches with fewer than 300 results, the delay is tolerable, i.e...
[2]: http://107.6.139.93/Viera.homes
To be honest, I'm somewhat in over my head here (I'm a database guy) and I'm trying to learn, but I have no idea whether it's even going to be possible to improve performance without using pagination (something my client is very much against).
I'm wondering if anyone here has any insights into my issues.
EDIT - the same "lock-up" delay occurs when you resize the browser and wait for the responsiveness to kick in.
I think you're overloading the browser's memory, and that's not something you can solve with CSS; it's the whole package (images and content).
You could solve this with infinite scroll, so that content is only loaded as the user scrolls. There are some things you have to look into before throwing yourself into infinite scrolling, especially at the SEO level.
You might want to read this:
http://googlewebmastercentral.blogspot.nl/2014/02/infinite-scroll-search-friendly.html
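As for the client-side mechanics, here is a rough sketch of loading the results in batches as the user scrolls; the batch size, element IDs and the allResults array are placeholders for your own data and markup:

```js
// Rough sketch of infinite scroll: render results in batches and append the
// next batch only when the user nears the bottom of the page.
var BATCH_SIZE = 50;   // placeholder batch size
var nextIndex = 0;

function renderBatch(results) {
  var fragment = document.createDocumentFragment();
  results.slice(nextIndex, nextIndex + BATCH_SIZE).forEach(function (result) {
    var div = document.createElement('div');
    div.className = 'result-container';   // your responsive container
    div.textContent = result.title;       // fill in your real markup here
    fragment.appendChild(div);
  });
  nextIndex += BATCH_SIZE;
  document.getElementById('results').appendChild(fragment);
}

window.addEventListener('scroll', function () {
  var nearBottom = window.innerHeight + window.pageYOffset >=
                   document.body.offsetHeight - 500;   // 500px before the end
  if (nearBottom && nextIndex < allResults.length) {
    renderBatch(allResults);
  }
});

renderBatch(allResults);   // initial batch; allResults would come from your search
```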
I have an AngularJS application that uses web services to load content and render the view. I've checked the performance of the application in Chrome and it runs fast enough. However, on IE 8 I'm facing huge performance issues.
I noticed that Internet Explorer's memory usage was around 200 MB, and if I opened multiple instances of the application in separate tabs, the memory usage doubled with each instance. This slows down the response time and, in effect, the performance of the whole PC. There are no such issues in Chrome.
I suspect it is because of the data held in the application's model: with each IE tab the model data is duplicated and RAM usage grows accordingly. However, this problem doesn't occur for me in Chrome.
Please suggest some performance optimization techniques that I can use.
Try to avoid using ng-repeat to render a large list all at once; alternatively, use infinite scrolling (or otherwise cap how many items are rendered at a time) for the same purpose.
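For example, one simple way to cap what ng-repeat renders at a time (a button-driven variant rather than true infinite scrolling) is Angular's built-in limitTo filter with a growing limit; a minimal sketch where the module, controller and property names are made up:

```js
// Sketch: cap what ng-repeat renders at once using the limitTo filter and
// grow the limit on demand.
//
// Template (assumed):
//   <div ng-repeat="item in items | limitTo: visibleCount">{{item.name}}</div>
//   <button ng-click="showMore()">Show more</button>

angular.module('app').controller('ListCtrl', ['$scope', function ($scope) {
  $scope.items = [];        // filled from the web service
  $scope.visibleCount = 50; // render only the first 50 rows initially

  $scope.showMore = function () {
    $scope.visibleCount += 50; // reveal the next batch on demand
  };
}]);
```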
How would I go about determining what the hangups are in my javascript app when the profiler puts (program) at the top with 80%? Is my logic too complex for the hotspot tracking to occur? Is my memory footprint too big? What is generally the cause of this?
More Information:
There are no elements on the form save the one canvas tag
There are no waiting communications (xhr)
http://i.imgur.com/j6mu1.png
Idle cycles ("doing nothing") will also render as "(program)" (you may profile this SO page for a few seconds and get 100% of (program)), so this is not a sign of something bad in itself.
The other thing is when you actually see your application lagging. Then (program) will be contributed by the V8 bindings code (and the WebCore code they invoke, which is essentially anything: DOM/CSS operations, painting, memory allocations and GCs, what not.) If that is the case, you can record a Timeline of your application (switch to the Timeline panel in Developer Tools and press the Record button in the bottom status bar, then run your application for a while.) You will see many internal events with their timings as horizontal bars. You will see reflows, style recalculations, timers fired, GC events, and more (btw, the latest Chromium versions have an improved memory profiler utilization timeline, so you will be able to monitor the memory used by certain internal factors, too.)
To diagnose memory problems (many allocations entailing many full GC cycles) you may use the Profiles panel. Take a heap snapshot before the intensive part of your code starts, and another one after this code has run for some time. Then compare the two heap snapshots (via the SELECT on the right at the bottom) to see which allocations have taken place, along with their memory impact.
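If you want your own code to show up with readable labels in such a recording, one option (in reasonably recent browsers) is the User Timing API; a small sketch with made-up label names and a hypothetical drawFrame() standing in for the code you suspect:

```js
// Sketch: bracket a suspect section of code with User Timing marks so it
// shows up as a named entry in the DevTools timeline (label names are made up).
performance.mark('frame-start');

drawFrame();   // placeholder for the code you suspect is expensive

performance.mark('frame-end');
performance.measure('frame', 'frame-start', 'frame-end');

// The measured durations can also be read back programmatically:
performance.getEntriesByType('measure').forEach(function (m) {
  console.log(m.name, m.duration.toFixed(1) + 'ms');
});
```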
To check whether it's getting slow due to a memory issue, use: chrome://memory
You can also check chrome://profiler/ for possible hints about what is happening.
Another option is to post your javascript code here.
See this link: it will help you in understanding Firebug profiler output.
I would say you should check which methods are taking the largest percentage of time and see whether you can trim unnecessary work from them. In your screenshot, some draw method running in the background is consuming around 14%; that may be why your JS is loading slowly. You should determine what's taking the time. Both Firefox and Chrome have a feature that shows network traffic. Have a look at YSlow as well; it is a great addon for Firebug.
I would suggest Chrome's auditing tools, which can tell you a lot about why this is happening. You should probably also include more information about:
how long did it take to connect to server?
how long did it take to transfer content?
how much other stuff are you loading on that page simultaneously?
anyway, even without all that, here's a checklist to improve performance (a rough Node/Express sketch of a few of these items follows the list):
make sure your JavaScript is treated and served as static content, e.g. via nginx/apache/whatever directly or a CDN, not hitting your application framework
investigate whether you can use a CDN for serving JavaScript; sometimes even pointing different domain names at your server makes a positive impact, e.g. instead of http://example.com/blah.js -> http://cdn2.example.com/blah.js
make sure your JS is served with proper expiration headers, so clients don't re-download it every time they refresh a page
turn on gzipping of JS content
minify your JS using the various tools available (e.g. the Google Closure Compiler)
combine your scripts (reduces the number of requests)
put your script tags just before the closing </body> tag
investigate and clean up/optimize your onload and document.ready hooks
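For illustration, here is what a few of those items (static serving, expiration headers, gzip) might look like if your server happens to be Node/Express rather than nginx or Apache; a sketch under that assumption, not a drop-in config:

```js
// Sketch for a Node/Express setup: serve the built/minified JS as static
// content with long expiry headers and gzip compression.
var express = require('express');
var compression = require('compression');   // gzip middleware
var app = express();

app.use(compression());                      // turn on gzipping of responses
app.use('/static', express.static('public', {
  maxAge: '30d'                              // proper expiration headers
}));

app.listen(3000);
```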
Have a look at the YSlow plugin and Google PageSpeed, both very useful in improving performance.
When writing DirectX applications, obviously it's desirable to support the user suspending the application via Alt-Tab in a way that's fast and error-free. What is the best set of practices for ensuring this? Things that need to be addressed include:
The best methods of detecting when your application has been alt-tabbed out of and when it has been returned to.
What DirectX resources are lost when the user alt-tabs, and the best ways to cope with this.
Major things to do and things to avoid in application architecture for purposes of alt-tab support.
Any significant differences between major DirectX versions as they apply to the above.
Interesting tricks and gotchas are also good to hear about.
I will assume you are using C++ for the purposes of my answers, but if you can afford to use C#, XNA (http://creators.xna.com/) is an excellent game platform that handles all of these issues for you.
1]
This article is helpful for handling Windows messages in the window procedure to detect when a window loses or gains focus; you could handle this in your main window: http://www.functionx.com/win32/Lesson05.htm. Also, check out the WM_ACTIVATEAPP message here: http://msdn.microsoft.com/en-us/library/ms632614(VS.85).aspx
2]
The graphics device is lost when the application loses focus in full-screen mode. Microsoft offers an article on how to handle this: http://msdn.microsoft.com/en-us/library/bb174717(VS.85).aspx There is also a lost-device tutorial here: http://www.codesampler.com/dx9src/dx9src_6.htm
DirectInput can also report a lost-device error state; here is a link about that: http://www.toymaker.info/Games/html/directinput.html
DirectSound can also report a lost-device error state; this article has code that handles it: http://www.eastcoastgames.com/directx/chapter2.html
3]
I would make sure never to disable Alt-Tab. You probably want minimal CPU load while the application is not active, since the user most likely Alt-Tabbed because they want to do something else, so you could pause the application completely or reduce the frames rendered per second. If the application is minimized, you of course don't need to render anything at all. Thinking about a networked game, my best suggestion is to still reduce the frames rendered per second as well as the number of network packets handled, possibly even throwing away many of the incoming packets until the game is re-activated.
4]
Honestly I would just stick to DirectX 9.0c (or DirectX 10 if you want to limit your target operating system to Vista and newer) if at all possible :)
Finally, the DirectX sdk has numerous tutorials and samples: http://www.microsoft.com/downloads/details.aspx?FamilyID=24a541d6-0486-4453-8641-1eee9e21b282&displaylang=en
We solved it by not using a full-screen DirectX device at all: instead we used a full-screen window with the top-most flag set to make it hide the task bar. If you Alt-Tab out of that, you can remove the flag and minimize the window. The texture resources are kept alive by the window.
However, this approach doesn't handle the lost-device event that occurs due to the lock screen, Ctrl+Alt+Delete, remote desktop connections, user switching or similar. But those don't need to be handled extremely fast or efficiently (at least that was the case in our application).
All serious D3D apps should be able to handle lost devices as this is something that can happen for a variety of reasons.
In DX10 under Vista there is a new "Timeout Detection and Recovery" feature which, in my experience, makes it fairly common for the graphics device to be reset, causing a lost device for your app. This seems to be improving as drivers mature, but you need to handle it anyway.
In DX8 and 9 (and 10?), if you create your resources (mainly vertex and index buffers and textures) using D3DPOOL_MANAGED, they will persist across lost devices and will not need reloading. This is because they are stored in system memory and the DX runtime copies them to video memory automatically. However, there is a performance cost due to the copying, and this is not recommended for rapidly changing vertex data. Of course you would profile first to determine if there is a speed issue :-)