I need a tool that monitors and, more importantly, logs requests on IIS. The tool should report basic information about each request: date/time, time taken to serve it, kilobytes transferred, and so on.
What do you people use for this kind of monitoring?
You can extend IIS's logging to include all of the properties you want to log. To do this:
1. Open IIS.
2. Select Properties on your website.
3. On the Web Site tab, click Properties in the logging section.
4. Select the Extended Properties tab.
5. Check all of the items you want to log.
6. Reset IIS (run iisreset).
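For reference, here is a representative (made-up) excerpt of the resulting W3C extended log; the exact field order depends on which items you checked:

    #Fields: date time cs-method cs-uri-stem sc-status sc-bytes time-taken
    2009-06-01 12:00:01 GET /default.aspx 200 4521 123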
You can now use a log parser to look through the logs. http://www.smartertools.com/ has a decent one called SmarterStats, which is free for a small site.
You can use IIS's log files and read them with Log Parser (a free download from Microsoft).
In response to the comment: the format of the IIS log file is documented in IIS Log File Format (IIS 6.0).
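For example, a query along these lines reports exactly the per-request basics asked about above (this assumes the IISW3C input format and that the time-taken and sc-bytes fields are enabled in the extended logging properties; the log path is a placeholder):

    LogParser.exe -i:IISW3C "SELECT date, time, cs-uri-stem, time-taken, sc-bytes FROM C:\Logs\ex*.log ORDER BY time-taken DESC" -o:CSV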
IIS log files plus a log analyser.
Log analysers like WebTrends will give you a lot of information.
Have a look at Operations Manager from the System Center family: http://www.microsoft.com/systemcenter/operationsmanager/en/us/default.aspx
See the eG Innovations IIS web server monitoring tool: http://www.eginnovations.com/web/iismonitor.htm
I have a REST API. It offers the services get person, get price, and get route.
How can I determine how long each call to each of these services takes?
For example, get person is very fast (around 5 ms); get route takes 2 s because it has to make a remote call to the Google API.
I could get the time at the beginning of the request and just before the response is submitted, compute the difference and log that to a database.
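For illustration, here is a minimal sketch of that approach as an ASP.NET IHttpModule, assuming the API is hosted in ASP.NET (the module name and log target are made up):

    using System;
    using System.Diagnostics;
    using System.Web;

    // Hypothetical timing module: starts a stopwatch when a request
    // begins and logs the elapsed time when it ends.
    public class RequestTimingModule : IHttpModule
    {
        public void Init(HttpApplication app)
        {
            app.BeginRequest += (s, e) =>
            {
                app.Context.Items["Stopwatch"] = Stopwatch.StartNew();
            };
            app.EndRequest += (s, e) =>
            {
                var sw = (Stopwatch)app.Context.Items["Stopwatch"];
                sw.Stop();
                // Replace with a write to your database or logging framework.
                Trace.WriteLine(string.Format("{0} took {1} ms",
                    app.Context.Request.RawUrl, sw.ElapsedMilliseconds));
            };
        }

        public void Dispose() { }
    }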
But that seems like a fair amount of overhead, so how would you do it? Would you do it at all, or just rely on on-machine profiling? What tools would you use that minimize overhead?
What I want to determine is whether any component might have low availability in production.
Thank you
So it looks like you want 2 things:
Minimal impact on your production environment
Figuring out how much time each request takes
In that case I would go for the IIS logs. In Windows Azure Diagnostics you can get these out of the box by adding the module and configuring it. As a result your IIS logs will be stored in your storage account.
After that you can download these logs and use Log Parser to execute some interesting queries that let you find the slowest pages, the pages with the most hits, the pages with the most exceptions, and so on. Log Parser can be a little hard to work with if you have never used it before. Take a look at the blog post by Scott Hanselman covering the Log Parser Lizard GUI tool: Analyze your Web Server Data and be empowered with LogParser and Log Parser Lizard GUI:
This powerful tool can give you all the information you need with minimal impact on your production instances.
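For example, a query along these lines finds the slowest pages by average time taken (assuming the IISW3C input format; -o:DATAGRID pops the results up in a grid window):

    LogParser.exe -i:IISW3C "SELECT cs-uri-stem, COUNT(*) AS Hits, AVG(time-taken) AS AvgMs FROM *.log GROUP BY cs-uri-stem ORDER BY AvgMs DESC" -o:DATAGRID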
When enabling performance counters in Windows Azure Diagnostics I have to specify the counters using some magic string literals like \Processor(_Total)\% Processor Time. I can't find a list of possible string literals.
Is there a list anywhere?
Adding my comment as an answer at the behest of @sharptooth :)
Once you RDP into your VM, open a command prompt and type "typeperf -q" to list all the available performance counters on your VM. As @Sandrino Di Mattia mentioned, you can save the result to a text file with "typeperf -q > counters.txt".
Please note that you may get different performance counters depending on the kind of role the VM is hosting (Web Role or Worker Role).
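Once you have a specifier, here is a minimal sketch of wiring it into Windows Azure Diagnostics, assuming the classic Microsoft.WindowsAzure.Diagnostics API (the sample rate and transfer period are arbitrary values):

    using System;
    using Microsoft.WindowsAzure.Diagnostics;
    using Microsoft.WindowsAzure.ServiceRuntime;

    public class WebRole : RoleEntryPoint
    {
        public override bool OnStart()
        {
            var config = DiagnosticMonitor.GetDefaultInitialConfiguration();

            config.PerformanceCounters.DataSources.Add(new PerformanceCounterConfiguration
            {
                // One of the "magic" specifiers reported by typeperf -q.
                CounterSpecifier = @"\Processor(_Total)\% Processor Time",
                SampleRate = TimeSpan.FromSeconds(30)
            });
            // How often samples are shipped to the storage account.
            config.PerformanceCounters.ScheduledTransferPeriod = TimeSpan.FromMinutes(1);

            DiagnosticMonitor.Start(
                "Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString", config);

            return base.OnStart();
        }
    }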
These counters come from the Windows performance monitoring infrastructure.
You are correct, however, that it is hard to find a real list anywhere on the internet. Here are a couple of links that will help you:
List of Performance Counters to use with Azure Web Roles
Good List of Performance Counters
The rest is a matter of searching the internet with your favorite search engine.
Connect to the Azure server with Remote Desktop Connection. Run perfmon and add a new counter; voilà, there's your list.
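If you would rather enumerate them programmatically, here is a rough C# sketch using System.Diagnostics (some system categories throw when queried without an active instance, so it skips those):

    using System;
    using System.Diagnostics;

    class ListCounters
    {
        static void Main()
        {
            foreach (var category in PerformanceCounterCategory.GetCategories())
            {
                try
                {
                    var instances = category.GetInstanceNames();
                    if (instances.Length == 0)
                    {
                        // Single-instance category, e.g. \Memory\Available MBytes
                        foreach (var counter in category.GetCounters())
                            Console.WriteLine(@"\{0}\{1}",
                                category.CategoryName, counter.CounterName);
                    }
                    else
                    {
                        // Multi-instance, e.g. \Processor(_Total)\% Processor Time
                        foreach (var instance in instances)
                            foreach (var counter in category.GetCounters(instance))
                                Console.WriteLine(@"\{0}({1})\{2}",
                                    category.CategoryName, instance, counter.CounterName);
                    }
                }
                catch (Exception)
                {
                    // Skip categories that cannot be enumerated right now.
                }
            }
        }
    }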
For what it's worth, based on Gaurav Mantri's comment, this is the list that I get from my Web role. It is the output of typeperf -q. I assume different Azure roles have different counters, and they may even vary between Web roles. Ours is a Medium instance.
The list is too big for a post on SO, so here's a github gist:
https://gist.github.com/seibar/74b376aa1c57f2f7c2fd
I'm hoping to get some definitive answers on this question. When writing web applications in .NET, should you use the append feature of the IIS access log to store what you want, or should you write to the Windows event log?
I'm talking about informational logging, warning logging, and error logging. Would it be smart to split it up and write to both the IIS log and the event log?
Or, for that matter, for normal .NET Windows applications: should they write to their own log file or use the event log?
I would advise you not to put your application-specific log information in the IIS log file.
That log file is specific to the web server, and its format is determined by the log settings in IIS. Third-party log-analysis tools are available for it, and you may end up in a situation where those tools cannot parse the IIS log files because the information you have put there is mis-formatted.
Application-specific log information is better kept separate, in either the event log or a dedicated log file. Whether it should be the event log is really a matter of preference. Event log entries can easily be parsed and filtered in the Event Viewer, and are therefore much easier to deal with. It's a well-known format that you can send to other people for further investigation, and they can easily load it into their Event Viewer. An excellent choice for support cases.
If you expect a lot of logging, it's preferable to create a dedicated application-specific event log so you don't clutter the generic Application event log.
I would say that this is true for Windows desktop applications as well.
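As a rough sketch of the dedicated-event-log approach (the source and log names here are made up; creating a source requires admin rights, so it is usually done by an installer):

    using System.Diagnostics;

    // One-time setup, typically run by an installer as admin:
    // registers a source in a dedicated log instead of "Application".
    if (!EventLog.SourceExists("MyRealtyApp"))
    {
        EventLog.CreateEventSource("MyRealtyApp", "MyRealtyAppLog");
    }

    // At runtime, write informational/warning/error entries.
    EventLog.WriteEntry("MyRealtyApp", "Listing import completed.",
        EventLogEntryType.Information);
    EventLog.WriteEntry("MyRealtyApp", "Price feed is slow to respond.",
        EventLogEntryType.Warning);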
If you don't already, I would also recommend using one of the well-known logging frameworks, such as log4net or the Enterprise Library Logging Block. It will save you a lot of time and pain.
I want to benchmark a new server using historical HTTP-request data. I have a text file that contains one day's worth of real historical requests to a production server. What is the best tool for replaying that list of requests against the server I'm testing? The tool should let me configure the following:
Number of threads making the requests
Number of requests/second sent
A list of request URLs to use when making the requests.
Apache Bench seems like a close fit. However, it does not seem to be able to take a list of request URLs as a parameter. What would you recommend?
I have been using http_load with pretty good success.
http://acme.com/software/http_load/
http_load is a Unix command-line tool that lets you specify the number of requests per second and the number of threads to use. It pulls URLs from a text file that you name on the command line. The tool is very similar to Apache Bench, the big difference being that http_load lets you use a list of URLs when making requests, whereas Apache Bench makes requests to a single URL only.
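A typical invocation looks something like this (the numbers are illustrative; http_load wants exactly one start specifier, -parallel or -rate, and one end specifier, -fetches or -seconds):

    http_load -rate 100 -seconds 60 urls.txt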
If all your requests are GET requests (no POST), then you can use the JMeter Access Log Sampler. There are some straightforward step-by-step instructions on how to set it up. It will run through your requests either in order or using a number of concurrent threads, and you can specify how many requests should run. You can then use JMeter's other features, like reports, to analyze the results.
I would recommend Visual Studio Test Edition. It would be a relatively simple matter to create a coded web test that loads your URLs for testing (a sketch follows below).
This advice presupposes knowledge of C# or VB and the ability to install and license Visual Studio. Visual Studio does have a trial edition available, so you can get a taste of what you are getting first.
Visual Studio does not require the target site to be running any particular hardware or software, but it does provide more information on the load on the server through the use of Perfmon counters, and any ASP.NET application will provide more detail on the running app.
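As a sketch of what such a coded web test might look like, assuming the Microsoft.VisualStudio.TestTools.WebTesting API (the file path is a placeholder):

    using System.Collections.Generic;
    using System.IO;
    using Microsoft.VisualStudio.TestTools.WebTesting;

    // Coded web test that replays URLs from a text file,
    // one request per line.
    public class ReplayUrlsWebTest : WebTest
    {
        public override IEnumerator<WebTestRequest> GetRequestEnumerator()
        {
            foreach (var url in File.ReadAllLines(@"C:\tests\urls.txt"))
            {
                yield return new WebTestRequest(url);
            }
        }
    }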
The caveat to this is that I have not actually used any other web testing software.
My agile team will be adding new features to an existing realty website. As we add the features, we want to get a better handle on the site's overall performance as well as the performance of particular pages.
I would like to automate the gathering of performance metrics on a request/response basis for each page (e.g. which sub-requests the browser sends out, how many there are, how much data is transferred, and how long each request takes to fulfill).
Firebug currently captures this information in its net panel, however, I haven't found any way to programmatically pull this information out.
Does anyone know of a way to pull this information out after a page has loaded?
We are currently running our user acceptance tests with Selenium and I have considered adding this feature to the selenium interface so that our tests could run and collect the data without starting any other service.
All suggestions are welcome, including ones that leverage other tools/methods to gather the performance metrics.
Thank you.
Jan Odvarko has written a tutorial on how to use the new listener functionality within Firebug to log Net panel results:
"Since Firebug 1.4a13 the Net panel introduces, among other things, several new events that allow to easily collect all network requests and also related info gathered and computed by Firebug.
This functionality should be useful also in cases where Firebug extensions want to store network activity info into a local database or send it back to the server for further analysis (I am thinking about performance statistics here)."
Take a look at the NetExport extension for Firebug.
Steps:
1. Enable auto-export in the preferences (you can automate this step as well).
2. Choose the folder where the data is to be written.
3. Read the file (a parsing sketch follows below).
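NetExport writes HAR (HTTP Archive) files, which are plain JSON. As a rough sketch of pulling per-request timings out of one, assuming Json.NET (the file path is a placeholder):

    using System;
    using System.IO;
    using Newtonsoft.Json.Linq;

    class HarTimings
    {
        static void Main()
        {
            // NetExport's HAR output is JSON: log.entries[] holds one
            // object per request with its URL and total time in ms.
            var har = JObject.Parse(File.ReadAllText(@"C:\har\export.har"));
            foreach (var entry in har["log"]["entries"])
            {
                Console.WriteLine("{0} ms  {1}",
                    entry["time"], entry["request"]["url"]);
            }
        }
    }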
While it isn't directly a Firebug solution, perhaps something like Jiffy would help?
Jiffy works pretty much like a server-based version of Firebug's measurement tools. I haven't used it in anger yet, but it may do what you're looking for.
http://code.google.com/p/jiffy-web/
Jiffy allows developers to:
measure individual pieces of page rendering (script load, AJAX execution, page load, etc.) on every client
report those measurements and other metadata to a web server
aggregate web server logs into a database
generate reports
There is a way to use YSlow to beacon performance data out to a URL of your choice. It's not well documented; the only info I found is here:
http://tech.groups.yahoo.com/group/exceptional-performance/messages/490?threaded=1&m=e&var=1&tidx=1
Aside from that, I would look into writing a Firebug plugin; I think you can access most Firebug properties. Here's a tutorial: http://www.firephp.org/Reference/Developers/ExtendingFirebug.htm
Ben,
I've done this by extending Selenium RC's ProxyHandler to queue up the URLs seen and then letting you pull them down via some other API. It requires that you proxy everything, which isn't Selenium's default behavior. The nice thing is that Selenium ends up being both the place to drive the automation and the place to collect the results.
This is probably a feature we'll add to Selenium RC soon after we get 1.0 out the door (we're very close!).
Okay, I admit this is not a direct answer, but how about going right to the source? Cut out Firebug and go to the web server. Can the server log events with sufficient granularity to allow calculation of the information you require? Parsing the log file into useful data should not be particularly difficult, and it has the advantage of being independent of the user's platform, with the potential to log a greater set of data than Firebug offers (awesome tool, btw).
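As a rough sketch of that parsing step in C#, assuming the W3C extended format with a #Fields header (the path and field names are whatever your server is configured to log):

    using System;
    using System.IO;

    class IisLogParse
    {
        static void Main()
        {
            string[] fields = null;
            foreach (var line in File.ReadLines(@"C:\logs\ex090601.log"))
            {
                if (line.StartsWith("#Fields:"))
                {
                    // The header names the space-separated columns below it.
                    fields = line.Substring("#Fields:".Length).Trim().Split(' ');
                }
                else if (!line.StartsWith("#") && fields != null)
                {
                    var values = line.Split(' ');
                    int uri = Array.IndexOf(fields, "cs-uri-stem");
                    int taken = Array.IndexOf(fields, "time-taken");
                    if (uri >= 0 && taken >= 0)
                        Console.WriteLine("{0} ms  {1}", values[taken], values[uri]);
                }
            }
        }
    }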