How can I log how long each ASP.NET WebAPI call takes?

Is there a way to attach some sort of global logger so that I know how long each WebAPI request took?

Yes, check out MiniProfiler.
It has a lot of different adapters too; for example, I've used it with ServiceStack and Dapper, but it also works with vanilla Microsoft Web API and Entity Framework.
On localhost (or whatever criteria you use to turn the profiler on in your global application class), hitting your site URLs in the browser shows a UI widget in the top right corner detailing total request time, serialization time, and database execution time (including the actual SQL executed).
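For context, wiring the profiler up in the global application class can be as small as this; a minimal sketch assuming the classic MiniProfiler v2/v3 start/stop API (newer versions configure this differently):

using System;
using System.Web;
using StackExchange.Profiling;

public class Global : HttpApplication
{
    protected void Application_BeginRequest(object sender, EventArgs e)
    {
        // Only profile local requests - substitute whatever criteria you prefer.
        if (Request.IsLocal)
        {
            MiniProfiler.Start();
        }
    }

    protected void Application_EndRequest(object sender, EventArgs e)
    {
        // Stops timing and saves the profile to the configured storage.
        MiniProfiler.Stop();
    }
}

Every request then gets a profile with its total duration, which answers the "how long did each call take" question even before you add any MiniProfiler.Current.Step() calls of your own.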
To view the last 100 requests, browse:
~/mini-profiler-resources/results
For more robust storage there are several options; SQL Server is supported out of the box. Run the table-creation script that MiniProfiler ships for SQL Server, then set the storage provider:
MiniProfiler.Settings.Storage = new SqlServerStorage("..connectionstring...");
Or use SQLite, which also works out of the box.
You can even set multiple storage options.
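As a sketch, multiple storage options with the v3 MultiStorageProvider look like this (results are saved to every store and loaded from the first store that has them; the connection string is a placeholder):

using System;
using StackExchange.Profiling;
using StackExchange.Profiling.Storage;

MiniProfiler.Settings.Storage = new MultiStorageProvider(
    new HttpRuntimeCacheStorage(TimeSpan.FromHours(1)),   // fast, in-process
    new SqlServerStorage("...connection string..."));     // durable, shared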
When adding the profiler UI to view requests, it should look something like this:
[screenshot of the MiniProfiler results UI, omitted]

Related

Why is ServiceStack caching in Service, not FilterAttribute?

In MVC and most other service frameworks I've tried, caching is done via an attribute/filter, either on the controller/action or the request, and can be controlled through a caching profile in the config file. That seems to offer more flexibility and also leaves the core service code cleaner.
But ServiceStack has it inside the service. Is there a reason why it's done this way?
Can I add a CacheFilterAttribute, but delegate to service instead?
ToOptimizedResultUsingCache(base.Cache, cacheKey, () => {
    // Delegate to the Request/Service being decorated?
});
I searched around but couldn't find an answer. Granted, it probably won't make much difference, because ServiceStack's caching-via-delegate method is quite clean, and you seldom change caching strategy on the fly in the real world. So this is mostly out of curiosity. Thanks.
Because the caching pattern involves first checking to see if the result is cached and, if not, executing the service, populating the cache, and then returning the result.
A Request Filter doesn't allow you to execute the service, and a Response Filter means that the Service will always execute (undermining the usefulness of the cache), so the alternative would require a Request + Response filter combination where the logic is split into two disjointed parts. Having it inside the Service lets you see and reason about how it works and what exactly is going on; it also allows full access to calculate the unique hash key used and exactly what and when (or even if) to cache, which is harder to control with a generic black-box caching solution.
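As a sketch of that pattern inside a ServiceStack service (ToOptimizedResultUsingCache is ServiceStack's real extension method, assuming v3-era APIs; the CachedOrders DTO and LoadOrders call are hypothetical):

public class CachedOrders
{
    public int CustomerId { get; set; }
}

public class OrdersService : Service
{
    public object Get(CachedOrders request)
    {
        // The service has full control over how the unique cache key is calculated.
        var cacheKey = "urn:orders:" + request.CustomerId;
        return RequestContext.ToOptimizedResultUsingCache(base.Cache, cacheKey, () =>
        {
            // This lambda only runs on a cache miss; its result is cached
            // before being returned and served from the cache afterwards.
            return LoadOrders(request.CustomerId);
        });
    }

    private object LoadOrders(int customerId)
    {
        // Hypothetical database call.
        return new object();
    }
}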
Although we are open to 'baking-in' built-in generic caching solutions (either via an attribute or ServiceRunner / base class). Add a feature request if you'd like to see this, specifying the preferred functionality/use-case (e.g. cache based on Time / Validity / Cache against user-defined Aggregate root / etc).

MiniProfiler.Current is null when called from System.Net.Http.DelegatingHandler

I'm using MiniProfiler in my ASP.NET Web API project and want to track the performance of some code that runs in a custom DelegatingHandler.
The calls to MiniProfiler.Current.Step() inside the DelegatingHandler don't show up in the results. Other calls in the same project show up fine.
Further investigation revealed that MiniProfiler.Current is retrieved from HttpContext.Current in the WebRequestProfilerProvider. And HttpContext.Current is null when called from DelegatingHandler.
Is there a better way to retrieve the MiniProfiler.Current so that it works inside the handler?
MiniProfiler timings are stored in HttpContext.Current by default (as you discovered). Thus if you are calling MiniProfiler from a place where HttpContext.Current is null, the results cannot be saved. The solution is to save (and retrieve) the results somewhere else.
MiniProfiler offers the option of changing the location where all results are stored and retrieved from (using MiniProfiler.Settings.Storage). The new v3 MiniProfiler (available as a beta NuGet package) offers the option of configuring a different IStorage for each request, and of using a MultiStorageProvider to designate multiple locations into which results can be stored and retrieved. You can see an example of this in the Sample.Mvc project on GitHub.
In your case, the best approach might be to set a MultiStorageProvider for your global MiniProfiler.Settings.Storage that will first save/retrieve from HttpRuntimeCacheStorage and then afterwards use some other IStorage that is accessible from the DelegatingHandler. Then in the DelegatingHandler, set MiniProfiler.Current.Storage to use only the second storage option that you set in the MultiStorageProvider (since it is pointless to try to save to the HttpCache). In this way, profiles from the DelegatingHandler will be saved into your second storage option, and will be retrieved for viewing with your other results (since MultiStorageProvider will load results from the first place it can get them; if it doesn't find the result in the HttpCache, it will go to the second option).
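A sketch of the handler side, assuming the v3 APIs described above (the MultiStorageProvider is configured globally as usual; SqlServerStorage stands in for whichever second store you choose):

using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;
using StackExchange.Profiling;
using StackExchange.Profiling.Storage;

public class ProfiledHandler : DelegatingHandler
{
    // The same second store that the global MultiStorageProvider was configured with.
    private static readonly IStorage SecondStore =
        new SqlServerStorage("...connection string...");

    protected override Task<HttpResponseMessage> SendAsync(
        HttpRequestMessage request, CancellationToken cancellationToken)
    {
        var profiler = MiniProfiler.Current;
        if (profiler != null)
        {
            // Saving to the HttpContext-backed cache is pointless here,
            // so point this profile at the shared store only.
            profiler.Storage = SecondStore;
        }

        // MiniProfiler's Step() extension is null-safe, so this is harmless
        // even when no profiler is active for the request.
        using (profiler.Step("DelegatingHandler work"))
        {
            return base.SendAsync(request, cancellationToken);
        }
    }
}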
Note - having multiple storage options is useful in this case, but it can have a negative impact on the performance of retrieving profiles.

ProtocolViolationException Load testing web service (GET action with content-body)

I created an ASP.NET MVC4 Web API service (REST) with a single GET action. The action currently needs 11 input values, so rather than passing all of those values in the URL, I opted to encapsulate those values into a single class type and have it passed as Content-Body. When I test in Fiddler, I specify the verb as GET, and enter the JSON text in the "Request Body" input box. This works great!
The problem is when I attempt to perform load testing in Visual Studio 2010 Ultimate. I am able to specify the GET action and the JSON Content-Body just fine. But when I run the load test, VS reports exceptions of type ProtocolViolationException (Cannot send a content-body with this verb-type) in the test results. The test executes in 1 ms, so I suspect the exceptions are causing the test to immediately abort. What can I do to avoid those exceptions? I'd prefer not to change my API to use URL arguments just to work around the test tooling. If I should change the API for other reasons, let me know. Thanks!
I found it easier to post this as an answer rather than carry on the discussion in comments.
Sending content with GET is not defined in RFC 2616, yet it is not prohibited either. So as far as the spec is concerned, we are in territory where we have to use our own judgement.
GET is canonically used to get a resource. So you are retrieving this resource using this verb with the parameters you are sending. Since GET is both safe and idempotent, it is ideal for caching. Caching usually takes place based on the resource URI, and sometimes based on various headers. The point is that cache implementations, AFAIK, do not use the GET content (and to be honest, I have not seen any GET with content in the real world), and it would not make sense to include the content in the cache-key generation, since that reduces the scalability of the caches.
If you have parameters to send, they must be in the URI, since they are part of what defines the resource. As such, I strongly believe sending content with GET is wrong.
Even when you look at implementations such as OData, they put the criteria in the URI. I cannot imagine your (or any) application's requirements being beyond OData's query requirements.
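For what it's worth, moving the values to the query string doesn't mean giving up the single class: Web API can bind a complex type from the URI. A minimal sketch, where SearchCriteria and its properties are hypothetical stand-ins for your 11 values:

using System.Web.Http;

public class SearchCriteria
{
    public string Region { get; set; }
    public int PageSize { get; set; }
    // ...the remaining values as properties...
}

public class SearchController : ApiController
{
    // GET /api/search?region=west&pageSize=25&...
    // [FromUri] tells Web API to bind the complex type from the
    // query string instead of expecting it in the request body.
    public SearchCriteria Get([FromUri] SearchCriteria criteria)
    {
        return criteria; // echo back, standing in for the real search logic
    }
}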

Subsonic, SharedDbConnectionScope and ApplicationState

I'm looking at using SubSonic with a multi-tenant ASP.NET web application. There are multiple DBs (one per client/instance). The user logs in with a domain suffix to their username (e.g. user#tenant1, user#tenant2).
The custom membership provider will then determine which database a user is using, and authenticate against it. All user-initiated calls in the web app will be wrapped in a SharedDbConnectionScope call; however, I have a question regarding caching SubSonic items.
Basically each instance will have a few records that rarely change (search options/configurations). I would like to read these in the Application_Start event and cache them in Application state.
In the Application_Start event, it would loop over each client database, use a SharedDbConnectionScope to connect to each DB, and create these cached records (e.g. Application('tenant1_search_obj') = subsonic_object).
When a user loads the search page, it would then check what domain the user is in, and then retrieve that search option from the cache.
Is this feasible? I'm just concerned that if I cache an object, when I retrieve it from the application cache it won't know what connection it's using, and might possibly pull in the wrong data.
I'd like to avoid putting this in the session object if possible.
It's possible, but probably not a good idea, since it doesn't scale at all: you're going to open a new connection for every single client whether they show up or not.
Maybe your best bet is to "lazy load" the setting - the first hit on the search page loads the config into the cache or Application settings, and there it stays.
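A sketch of that lazy-load idea, where the key naming follows the question and GetConnectionStringFor/LoadSearchConfig are hypothetical (the SharedDbConnectionScope constructor signature also varies between SubSonic versions):

public static object GetSearchConfig(HttpApplicationState app, string tenant)
{
    var key = tenant + "_search_obj";          // e.g. "tenant1_search_obj"
    if (app[key] == null)
    {
        app.Lock();                            // Application state needs locking on writes
        try
        {
            if (app[key] == null)              // re-check after acquiring the lock
            {
                var connStr = GetConnectionStringFor(tenant);    // hypothetical lookup
                using (new SharedDbConnectionScope(connStr))
                {
                    app[key] = LoadSearchConfig();               // hypothetical SubSonic query
                }
            }
        }
        finally
        {
            app.UnLock();
        }
    }
    return app[key];
}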
Other than that - to answer your question, it is possible. If you're using SubSonic 3, just create a new provider on the fly using ProviderFactory.GetProvider(connectionString, "System.Data.SqlClient") and then execute your stuff against it.
For SubSonic 2 - SharedConnectionScope is what you want.

How do you measure the progress of a web service call?

I have an ASP.NET web service which does some heavy lifting - say, some file operations, or generating Excel sheets from a bunch of Crystal Reports. I don't want to be blocked by calling this web service, so I want to make the web service call asynchronous. I also want to call this web service from a web page, and I want some mechanism that will allow me to keep polling the server so that I can show some indicator of progress on the screen - say, the number of files that have been processed. Please note that I do not want a notification on completion of the web method call; rather, I want live progress status. How do I go about it?
Write a separate method on the server that you can query by passing the ID of the job that has been scheduled, and which returns an approximate value between 0 and 100 (or 0.0 and 1.0, or whatever) of how far along it is.
E.g. in REST style, you could make a GET request to http://yourserver.com/app/jobstatus/4133/ which would return a simple '52' as text/plain. Then you just have to query that every (second? two seconds? ten seconds?) to see how far along it is.
How you actually accomplish the monitoring on the backend hugely depends on what your process is and how it works.
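On the page's side, polling that endpoint could look something like this sketch (the URL shape follows the example above; the two-second interval is arbitrary):

using System;
using System.Net;
using System.Threading;

public static void PollJobStatus(int jobId)
{
    using (var client = new WebClient())
    {
        while (true)
        {
            // The status endpoint returns plain text such as "52".
            var text = client.DownloadString(
                "http://yourserver.com/app/jobstatus/" + jobId + "/");
            var percent = int.Parse(text);
            Console.WriteLine("Progress: {0}%", percent);
            if (percent >= 100) break;
            Thread.Sleep(TimeSpan.FromSeconds(2));   // poll interval
        }
    }
}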
I think XML web services are slow, so creating multiple methods and polling for progress will be extremely slow and will generate a huge load on the server. I wouldn't do it in a production environment. I see the same (but smaller) problems with database polling.
Try SOAP extensions instead. They implement an event-driven model. See Adding a Progress Bar to Your Web Service Client Application on MSDN.
You can also use SoapExtensions to notify your client of the download/process progress. The server can then send events to the client. Nothing in the client has to be changed if you don't use it.
Allows for something like this in your client:
//...
private localhost.MyWebServiceService _myWebService = new localhost.MyWebServiceService();
_myWebService.processDelegate += ProgressUpdate;
_myWebService.CallHeavyMethod();
//...

private void ProgressUpdate(object sender, ProgressEventArgs e)
{
    double progress = ((double)e.ProcessedSize / (double)e.TotalSize) * 100.0;
    // Show progress...
}
Have the initial "start report generation" web service call create a task in some task pool, and return to the caller the ID of the task.
Then, provide another method that returns the "percent done" for a given taskId.
Provide a third method that returns the actual result for a completed task.
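A sketch of those three methods backed by an in-memory task pool (all names here are assumptions, and as other answers note, in-memory state won't survive a web farm; a database is the safer home for progress there):

using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

public static class ReportJobPool
{
    static readonly ConcurrentDictionary<Guid, int> Progress =
        new ConcurrentDictionary<Guid, int>();
    static readonly ConcurrentDictionary<Guid, byte[]> Results =
        new ConcurrentDictionary<Guid, byte[]>();

    // 1. "Start report generation": kick off the work, return the task ID.
    public static Guid Start()
    {
        var id = Guid.NewGuid();
        Progress[id] = 0;
        Task.Factory.StartNew(() =>
        {
            for (var i = 0; i <= 100; i += 10)   // stand-in for the real work
            {
                Progress[id] = i;                // report percent done as you go
            }
            Results[id] = new byte[0];           // stand-in for the finished report
        });
        return id;
    }

    // 2. "Percent done" for a given task ID (-1 if unknown).
    public static int GetProgress(Guid id)
    {
        int percent;
        return Progress.TryGetValue(id, out percent) ? percent : -1;
    }

    // 3. The actual result for a completed task (null until it finishes).
    public static byte[] GetResult(Guid id)
    {
        byte[] result;
        return Results.TryGetValue(id, out result) ? result : null;
    }
}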
The easiest way would be to have the web service update a field in a database with the progress of the call, and then create a web method that queries that field and returns the value.
Make the web service return some sort of task ID or session ID. Make another web method to query with that ID, which returns the information needed (% completion, list of files, whatever). Poll this method at some interval from the client.
Use a database to store the progress information; if you keep it in the memory of the web service, this will not scale well in a web-farm environment, as the task may run on a different server than the one you are polling.
EDIT: I just saw another similar answer, and a comment on it. The commenter is right: you can use an in-memory table to avoid disk operations, while still using a separate DB server.
