I need to collect data on Firefox CPU usage during a web-development coding session, and I'm wondering whether it is possible to monitor the CPU usage of a particular Firefox plugin.
Right now I'm using Windows' perfmon.msc, but it only lets me monitor the Firefox process as a whole.
Do you know of any tools that would let me get CPU data for a single plugin? Is it possible at all?
You could analyze the CPU usage using Process Explorer. Right-click the Firefox process and select Properties. On the Threads tab you will see the individual threads, including add-ins such as Flash or Acrobat, with their CPU usage listed.
EDIT: In fact, it should be possible to monitor threads with perfmon, too: right-click to select Add Counters... and then choose Thread as the performance object.
I'd guess your best option would be to test your plugin in a separate Firefox process, but you're probably doing that anyway.
For real profiling you should use Firebug. I'm not sure about it, but I think it is possible to run XUL apps inside Firefox (without integrating them as a plugin). If that is not an option, then you could separate out the code you suspect to be slow into a web page and profile it with Firebug. This would of course only work for code that does not interact with the Mozilla core.
Actually, Firefox has had a built-in "task manager" for a few years now. Just type about:performance in the URL bar. It shows the Name, Type, Energy Impact and Memory of each tab and add-on.
If you want to dig deeper, Shift + F5 opens the Performance tool, where you can record e.g. opening a website and look into timings etc.
There are some JS profilers that also profile extension JS; however, they don't really help in finding problematic add-ons.
There was such a feature in the concept design of Firefox 4; however, it was dropped, as FX4 is feature-frozen now. I'm still after that feature and wish to follow any progress in that direction.
Here is a question to find out more about it:
https://superuser.com/q/218733/46962
For CPU utilization, you can collect the data using MS Perfmon, which is part of Windows and is also used for similar purposes, such as collecting CPU performance and statistics data on SQL Server for optimization.
As of Firefox 94 there are such tools:
about:performance can show you CPU (Energy Impact) and Memory usage for all tabs and addons.
about:memory lets you record processes' resource-usage statistics, which you can then filter by the CPU-hungry process id obtained from the htop or top command. In the recorded snapshot you'll see extensions with their unique ids, which you can use to identify and remove the offending extension.
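If you want the CPU-hungry process id programmatically rather than eyeballing htop, here is a minimal stdlib-only sketch. It is Linux-specific (it reads /proc), and the function names are my own; it simply samples per-process CPU ticks twice and reports the biggest consumers:

```python
import os
import time

def cpu_ticks(pid):
    """Return utime + stime (in clock ticks) from /proc/<pid>/stat."""
    with open(f"/proc/{pid}/stat") as f:
        # The comm field can contain spaces/parens, so split after the last ')'.
        fields = f.read().rsplit(")", 1)[1].split()
    return int(fields[11]) + int(fields[12])  # utime (field 14), stime (field 15)

def top_cpu(interval=1.0, n=5):
    """Sample every pid twice and return the n biggest CPU-tick deltas."""
    before = {}
    for entry in os.listdir("/proc"):
        if entry.isdigit():
            try:
                before[int(entry)] = cpu_ticks(int(entry))
            except OSError:
                pass  # process exited between listing and reading
    time.sleep(interval)
    deltas = []
    for pid, t0 in before.items():
        try:
            deltas.append((cpu_ticks(pid) - t0, pid))
        except OSError:
            pass  # process exited during the interval
    return sorted(deltas, reverse=True)[:n]

if __name__ == "__main__":
    for ticks, pid in top_cpu(interval=2.0):
        print(f"pid {pid}: {ticks} ticks")
```

Once you have the busiest pid this way, you can match it against the per-process entries in about:memory.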
Related
Some sites opened in Firefox take 90% of the CPU.
Is there some diagnostic utility or plugin to find out which sites these are?
If the site is known, is it possible to tell which script, plugin or other component is the reason for the 90% CPU usage?
There are also many other issues outside of Firefox itself that can cause high CPU usage. I usually find this type of problem has to do with available memory: if you don't have much available memory left, the CPU usage is caused by the OS thrashing (i.e. moving pages between virtual and real memory). What I do is kill Firefox/Chrome with the task manager and then have it restore the pages; then I clean up the number of tabs I have open. Another thing that can cause CPU/memory usage issues is badly designed plug-ins. Disable them and see if you still have a problem.
Since Apple has dropped Spin Control.app (the utility for monitoring and logging unresponsive processes), I am in need of a replacement.
I know I can use spindump directly, but I really liked the automated graphical front-end for it.
Can anyone suggest a good replacement?
My needs are: automatically sample my process by name and store the log when the process becomes unresponsive.
I've been desperately looking for a Spin Control alternative, and I noticed that within Instruments there's a tool called "spin monitor". Annoyingly, this isn't shown in the initial template wizard; you have to find it by clicking Library in the title bar first (see screenshot):
I'm not particularly experienced with development in Xcode, so this answer might not be what you are looking for, but after a while spent searching and reading up on Spin Control.app, I found this article in the Apple Developer documentation.
It appears that there is no single tool that replaces Spin Control.app. Instead, Apple wants you to use a combination of Instruments, their unified collection of utilities for debugging and monitoring iOS/OS X apps, and Shark, an advanced tool for sampling/tracing a single application or all running applications.
In particular, you may find this feature of Shark to be useful:
Shark also offers the windowed time facility feature for several of
its sampling options. The windowed time facility tells Shark to record
and process a finite buffer of the most recently acquired samples.
This feature lets Shark record data continuously and for long periods
of time in the background. You can then direct Shark to stop sampling
only after something interesting occurs in your code.
A guess
Whilst Spin Control.app is no longer present, the operating system might respect changes to preferences in the com.apple.spincontrol domain.
If that is true, then a change to the value for hangDelay could effectively change the delay before sampling begins:
sh-3.2$ defaults read com.apple.spincontrol
{
hangDelay = 5;
watchOnlyApplication = 0;
}
sh-3.2$ defaults write com.apple.spincontrol hangDelay 10
sh-3.2$ defaults read com.apple.spincontrol
{
hangDelay = 10;
watchOnlyApplication = 0;
}
sh-3.2$
Less likely
For when a single application was watched, I guess that the value for watchOnlyApplication could be either the name or the PID of the process (thanks to Aeyoun for a comment). There's a description of the feature, but not of the values, in a 2006 MacTech article, OS X Investigation and Troubleshooting - Part 2: The Secrets to OS X success.
I doubt that setting watchOnlyApplication to anything other than 0 can have any use on a system that is without Spin Control.app.
You can see all the spin dumps within Console.app under the section Spin Reports.
You can use Activity Monitor to run a spindump manually (as of macOS 12):
Click the process you'd like to observe
Use the menu View > Sample Process
Use the menu View > Run Spindump
On Mac, for instance:
Why do text editors use 30 MB of RSS just for a text pane, fewer than 1K chars, open/save, find/replace and a few other basic functions? They used far less a few years ago, while their functionality did not change.
Why is Firefox using 500~1000 MB of RSS when you browse a few web pages of a few hundred KB each? Why does it use 300~500 MB just to start up, even with no add-ons?
Why does Safari act the same, even though it is supposed to be using Cocoa libraries, which should be shared and counted in VSZ, not RSS?
There are many answers to this and I'm procrastinating, so here it goes :)
For better or for worse, the system requirements of software increase because those developing it feel that more hardware resources are available to the typical user of their software. This means:
Fewer development resources are spent on fine-tuning (e.g. using higher-level programming frameworks, spending less developer time on optimizing resource usage vs. implementing new features and fixing behavior bugs).
New features can be added to the software (I don't know about text editors, but you'd probably be surprised if someone counted the number of new web platform features browsers added support for during the last few years.)
Different memory/performance tradeoffs can be made (i.e. caching more stuff in memory)
Memory usage of simple apps on Mac: see Why do Cocoa apps use so much memory?. Basically, your understanding of resident set size is quite simplistic.
Browsers' memory usage
Memory usage mainly depends on the content the browsers have to display. You might think you have a "few hundred kb" page loaded, when in fact a typical web page is an application with code for handling or tracking your clicks, a few sub-applications (one for each "like" and "+1" button, or for ads), with another application for the flash applet embedded on the page, etc.
Software is hard, and browsers are very complex in particular (e.g. Firefox has more than 9M lines of code according to ohloh). So an easy optimization can cost a lot more than you might think. A recent example I've seen (681201): when you restart Firefox and it's set to not load the pages in tabs before you switch to a tab, each "empty" tab still uses several hundreds of KB. This is because every "empty" tab actually has a blank HTML document loaded into it and a full-featured JavaScript environment set up, ready to execute code.
Seems easy to fix (just don't create a blank document for empty tabs!), but changing this requires auditing much of the browser code that works with tabs to properly handle the "empty tab" case, and worse, requires changes to add-ons that depend on every tab having a document. So while gradual improvements are being made (down to 160K per tab from more than a megabyte), it's not as easy as it sounds.
How would I go about determining what the hang-ups are in my JavaScript app when the profiler puts (program) at the top with 80%? Is my logic too complex for the hotspot tracking to occur? Is my memory footprint too big? What is generally the cause of this?
More Information:
There are no elements on the form save the one canvas tag
There are no waiting communications (xhr)
http://i.imgur.com/j6mu1.png
Idle cycles ("doing nothing") will also show up as "(program)" (you may profile this SO page for a few seconds and get 100% (program)), so this is not a sign of something bad in itself.
The other case is when you actually see your application lagging. Then "(program)" will be contributed by the V8 bindings code (and the WebCore code it invokes, which is essentially anything: DOM/CSS operations, painting, memory allocations and GCs, what not). If that is the case, you can record a Timeline of your application: switch to the Timeline panel in Developer Tools, press the Record button in the bottom status bar, then run your application for a while. You will see many internal events with their timings as horizontal bars: reflows, style recalculations, timers fired, GC events, and more. (By the way, the latest Chromium versions have an improved memory utilization timeline, so you will be able to monitor the memory used by certain internal factors, too.)
To diagnose memory problems (multiple allocations entailing multiple full GC cycles) you may use the Profiles panel. Take a heap snapshot before the intensive part of your code starts, and another one after this code has run for some time. Then compare the two snapshots (via the SELECT on the right at the bottom) to see which allocations have taken place, along with their memory impact.
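This take-two-snapshots-and-diff workflow is not unique to the browser's heap profiler; as an aside, Python's stdlib tracemalloc module illustrates the same idea in a few lines (the allocation below is deliberately artificial):

```python
import tracemalloc

tracemalloc.start()
before = tracemalloc.take_snapshot()

# The "intensive part": deliberately retain a pile of objects.
retained = [list(range(1000)) for _ in range(100)]

after = tracemalloc.take_snapshot()
# Diff the two snapshots, grouped by allocation site.
stats = after.compare_to(before, "lineno")
for stat in stats[:3]:
    print(stat)  # biggest allocation sites, with size deltas
```

The diff immediately points at the line that retained the memory, which is exactly what comparing two heap snapshots in the Profiles panel does for JavaScript.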
To check whether it's getting slow due to a memory issue, use chrome://memory.
You can also check chrome://profiler/ for possible hints about what is happening.
Another option is to post your javascript code here.
See this link: it will help you understand Firebug profiler output.
I would say you should check which methods are taking the largest percentages; you can minimize unwanted procedures in them. I saw in your figure that some draw method running in the background is consuming around 14%; maybe that is why your JS is loading slowly. You should determine what's taking the time. Both FF and Chrome have a feature that shows the network traffic. Have a look at YSlow as well; it's a great add-on for Firebug.
I would suggest some of Chrome's auditing tools, which can tell you a lot about why this is happening. You should probably include more information about:
how long did it take to connect to server?
how long did it take to transfer content?
how much other stuff are you loading on that page simultaneously?
anyway, even without all that, here's a checklist to improve performance for you:
make sure your JavaScript is treated and served as static content, e.g. directly via nginx/apache/whatever or a CDN, not hitting your application framework
investigate if you can make use of CDN for serving javascript, sometimes even pointing different domain names to your server makes a positive impact, e.g. instead of http://example.com/blah.js -> http://cdn2.example.com/blah.js
make sure your JS is served with proper expiration headers, so it isn't re-downloaded every time the client refreshes a page
turn on gzipping of js content
minify your JS using one of the available tools (e.g. the Google Closure Compiler)
combine your scripts (reduces the number of requests)
put your script tags just before the closing </body> tag
investigate and cleanup/optimize your onload and document.ready hooks
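To get a feel for the "turn on gzipping" point in the checklist, here is a quick check with Python's stdlib of how well script text compresses. The payload below is artificially repetitive, so it compresses far better than real-world JS (which typically shrinks by more like 60-80%):

```python
import gzip

# Hypothetical, highly repetitive "JS" payload; real scripts compress less well.
js = ("function add(a, b) { return a + b; }\n" * 200).encode()

compressed = gzip.compress(js)
ratio = 100 * (1 - len(compressed) / len(js))
print(f"{len(js)} bytes -> {len(compressed)} bytes ({ratio:.0f}% smaller)")
```

Even at the lower real-world ratios, gzip is one of the cheapest wins on the list, since every server on the checklist (nginx, Apache, CDNs) supports it with a one-line config change.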
Have a look at the YSlow plugin and Google PageSpeed; both are very useful for improving performance.
Does anybody know a good testing tool that can produce a graph of CPU cycles and RAM usage?
What I want to do, for example, is run an application, and while the application is running the testing tool will record CPU cycles and RAM usage and produce a graph as output.
Basically, what I'm trying to test is how heavy a load an application puts on RAM and CPU.
Thanks in advance.
In case this is Windows, the easiest way is probably Performance Monitor (perfmon.exe).
You can configure the counters you are interested in (such as Processor Time, Committed Bytes, etc.) and create a Data Collector Set that measures these counters at the desired interval. There are even templates for a basic System Performance Report, or you can add counters for the particular process you are interested in.
You can schedule when you want to execute the sampling, and you will be able to see the result using PerfMon or export it to a file for further processing.
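For the "further processing" step: once you export a Data Collector Set (or typeperf output) to CSV, summarizing it is straightforward with Python's stdlib. The sample below is made up (hostname, counter values and timestamps are invented), but it has the shape of a real PDH-CSV export:

```python
import csv
import io
import statistics

# Made-up sample in the shape of a PerfMon/typeperf CSV export.
sample = r'''"(PDH-CSV 4.0)","\\HOST\Process(firefox)\% Processor Time","\\HOST\Process(firefox)\Working Set"
"04/01/2024 10:00:01","12.5","412000256"
"04/01/2024 10:00:02","48.0","413001728"
"04/01/2024 10:00:03","30.5","413100032"
'''

def summarize(csv_text):
    """Return {counter name: (min, mean, max)} for each counter column."""
    rows = list(csv.reader(io.StringIO(csv_text)))
    header, data = rows[0], rows[1:]
    summary = {}
    for col, name in enumerate(header[1:], start=1):
        values = [float(row[col]) for row in data]
        summary[name] = (min(values), statistics.mean(values), max(values))
    return summary

for name, (lo, avg, hi) in summarize(sample).items():
    print(f"{name}: min={lo:.1f} avg={avg:.1f} max={hi:.1f}")
```

From the same per-sample values you could just as easily feed gnuplot or a spreadsheet to get the graph the question asks for.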
Video tutorial for the basics: http://www.youtube.com/watch?v=591kfPROYbs
A good sample showing how to monitor SQL Server:
http://www.brentozar.com/archive/2006/12/dba-101-using-perfmon-for-sql-performance-tuning/
LoadRunner is the best I can think of, but it's very expensive too! Depending on what you are trying to do, there might be cheaper alternatives.
Any tool that can hook into the standard Windows or 'NIX system utilities can do this. This has been a de facto feature set on just about every commercial tool for the past 15 years (HP, IBM, Micro Focus, etc.). Some of the web-only commercial tools (but not all) and the hosted services offer this as well. For the hosted services you will generally need to punch a hole through your firewall to give them access to the hosts for monitoring purposes.
On the open-source front this is a totally mixed bag. Some have it, some don't. Some support one platform but not others (i.e. they support Windows but not 'NIX, or vice versa).
What tools are you using? It is unfortunately common for people to have performance tools in use and not be aware of their existing toolset's monitoring capabilities.
All of the major commercial performance testing tools have this capability, as well as a fair number of the open source ones. The ability to integrate monitor data with response time data is key to the identification of bottlenecks in the system.
If you have a commercial tool and your staff is telling you that it cannot be done then what they are really telling you is that they don't know how to do this with the tool that you have.
It can be done using JMeter; once you install the agent on the target machine you just need to add the PerfMon monitor to your test plan.
It will produce two result files: the perfmon file and the request log.
You can also build a plot that compares resource consumption to the load and throughput. The throughput stops increasing when some resource's capacity is exceeded. As you can see in the image, CPU time increases as the load increases.
JMeter perfmon plugin: http://jmeter-plugins.org/wiki/PerfMon/
I know this is an old thread, but I was looking for the same thing today, and as I did not find anything that was simple to use and produced graphs, I made this helper program for ApacheBench:
https://github.com/juanluisbaptiste/apachebench-graphs
It runs ApacheBench and plots the result and percentile files using gnuplot.
I hope it helps someone.