I'm using MiniProfiler in my ASP.NET Web API project and want to track the performance of some code that runs in a custom DelegatingHandler.
The calls to MiniProfiler.Current.Step() inside the DelegatingHandler don't show up in the results; other calls in the same project show up fine.
Further investigation revealed that MiniProfiler.Current is retrieved from HttpContext.Current in the WebRequestProfilerProvider, and HttpContext.Current is null when called from the DelegatingHandler.
Is there a better way to retrieve the MiniProfiler.Current so that it works inside the handler?
MiniProfiler timings are stored in HttpContext.Current by default (as you discovered). Thus, if you are calling MiniProfiler from a place where HttpContext.Current is null, the results cannot be saved. The solution is to save (and retrieve) the results somewhere else.
MiniProfiler offers the option of changing the location where all results are stored and retrieved (using MiniProfiler.Settings.Storage). The new v3 MiniProfiler (beta nuget here) adds the option of configuring a different IStorage for each request, and of using a MultiStorageProvider to designate multiple locations into which results can be stored and retrieved. You can see an example of this in the Sample.Mvc project on GitHub.
In your case, the best approach might be to set a MultiStorageProvider for your global MiniProfiler.Settings.Storage that will first save/retrieve from HttpRuntimeCacheStorage and afterwards will use some other IStorage that is accessible from the DelegatingHandler. Then, in the DelegatingHandler, set MiniProfiler.Current.Storage to use only the second storage option from the MultiStorageProvider (since it is pointless to try to save to the HttpCache there). This way, profiles from the DelegatingHandler will be saved into your second storage option and will be retrieved for viewing along with your other results (since MultiStorageProvider will load results from the first place it can get them: if it doesn't find the result in HttpCache, it will go to the second option).
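A rough sketch of that wiring, assuming MiniProfiler v3 (SqlServerStorage stands in here for whatever second IStorage you pick that is reachable without HttpContext, and the step name is made up):

using System;
using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;
using StackExchange.Profiling;
using StackExchange.Profiling.Storage;

public class Global : System.Web.HttpApplication
{
    protected void Application_Start()
    {
        // Try the HttpRuntime cache first, then fall back to a second
        // store that does not depend on HttpContext.
        MiniProfiler.Settings.Storage = new MultiStorageProvider(
            new HttpRuntimeCacheStorage(TimeSpan.FromHours(1)),
            new SqlServerStorage("...connection string..."));
    }
}

public class ProfilingHandler : DelegatingHandler
{
    protected override async Task<HttpResponseMessage> SendAsync(
        HttpRequestMessage request, CancellationToken cancellationToken)
    {
        var profiler = MiniProfiler.Current;
        if (profiler != null)
        {
            // Saving to HttpRuntimeCacheStorage is pointless here,
            // so point this profile at the second store only.
            profiler.Storage = new SqlServerStorage("...connection string...");
        }

        using (profiler.Step("DelegatingHandler work")) // Step is a null-safe extension
        {
            return await base.SendAsync(request, cancellationToken);
        }
    }
}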
Note - having multiple storage options is useful in this case, but it can have a negative impact on the performance of retrieving profiles.
I'm the maintainer of the MiniProfiler Glimpse plugin, and with the latest MiniProfiler versions I'm not able to push data to Glimpse because the profiler is not yet populated when the tab's GetData() method is called (in previous versions it was).
Right now I wrap the MiniProfiler storage; by the time the Save() method is called all the needed information is there, but it's too late and I don't know how to send it to the tab.
So, what is the best approach (if possible) to add this information to a tab when it's ready in Miniprofiler?
Unfortunately, EndRequest is currently the last moment you can subscribe to in order to return the necessary data. That is the moment when Glimpse finalizes its monitoring for the given request and persists that information to the persistence store.
In v1 it is possible to add data after EndRequest, but only when using the default in-memory store. So you could return your wrapper, which will be empty at that moment, and it will be stored in memory, allowing you to change the wrapped content afterwards.
The above will not work for other persistence stores, though. We might also change this in v2 to make it deterministic, independent of the persistence store being used.
Maybe you could have your wrapper ask MiniProfiler to calculate the results at that moment, so they can be stored, even though those results might not be 100% complete?
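One possible shape for that wrapper is a decorator over the configured storage that forwards the finished profile from Save(). This is only a sketch: the member signatures are approximated from MiniProfiler v3's IStorage (adjust them to the version you target), and the callback into the tab is hypothetical.

using System;
using System.Collections.Generic;
using StackExchange.Profiling;
using StackExchange.Profiling.Storage;

public class ForwardingStorage : IStorage
{
    private readonly IStorage _inner;
    private readonly Action<MiniProfiler> _onSaved; // hypothetical hook into the tab

    public ForwardingStorage(IStorage inner, Action<MiniProfiler> onSaved)
    {
        _inner = inner;
        _onSaved = onSaved;
    }

    public void Save(MiniProfiler profiler)
    {
        _inner.Save(profiler);
        _onSaved(profiler); // the profile is complete here, but after Glimpse's EndRequest
    }

    // The remaining members just delegate to the wrapped storage.
    public MiniProfiler Load(Guid id) { return _inner.Load(id); }
    public void SetUnviewed(string user, Guid id) { _inner.SetUnviewed(user, id); }
    public void SetViewed(string user, Guid id) { _inner.SetViewed(user, id); }
    public List<Guid> GetUnviewedIds(string user) { return _inner.GetUnviewedIds(user); }
    public IEnumerable<Guid> List(int maxResults, DateTime? start = null,
        DateTime? finish = null, ListResultsOrder orderBy = ListResultsOrder.Descending)
    { return _inner.List(maxResults, start, finish, orderBy); }
}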
Is there a way to attach some sort of global logger so that I know how long each Web API request took?
Yes, check out MiniProfiler.
It's got a lot of different adapters too: for example, I've used it with ServiceStack / Dapper, but it also works with MS vanilla Web API / EF.
On localhost (or whatever criteria you use to turn the profiler on in your global app class), when you hit your site URLs in the browser, you'll get a UI widget in the top right corner detailing total request time, serialization time, and database execution time (including the actual SQL executed).
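For reference, the usual wiring in the global app class is something like this minimal sketch (assuming the StackExchange.Profiling package, with Request.IsLocal as the example criterion):

using StackExchange.Profiling;

public class Global : System.Web.HttpApplication
{
    protected void Application_BeginRequest()
    {
        if (Request.IsLocal) // your criterion for turning the profiler on
        {
            MiniProfiler.Start();
        }
    }

    protected void Application_EndRequest()
    {
        MiniProfiler.Stop();
    }
}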
To view the last 100 requests, browse:
~/mini-profiler-resources/results
For more robust storage there are options here; SQL Server is supported out of the box. Use this create script and then set the storage provider for MiniProfiler:
MiniProfiler.Settings.Storage = new SqlServerStorage("..connectionstring...");
Or use SQLite out of the box.
You can even set multiple storage options.
Examples here
When you add the profiler UI to view requests, it should look something like this:
I'm using the latest ASP.NET Web API nightly builds (dated 2013-01-16).
I have a simple EF database first model at the moment that has two entities - Patients and Visits. Each patient can have many visits.
I'd like to be able to query for my list of patients and have the visits entities for each patient returned inline. I know that WebAPI's OData implementation doesn't yet support $expand. I'm hoping that just means that optional client-controlled expansion is not supported and that I can force expansion server-side.
At the moment I'm not getting any of the visits inline.
For example, my PatientController's Get() method looks like:
[Queryable(AllowedQueryOptions = AllowedQueryOptions.Supported)]
public override IQueryable<Patient> Get()
{
    var query = this.entities.Patients.Include("Visits");
    return query;
}
I've verified that the query executing against my database does indeed include the visit information.
To use a publicly available OData service as an example, if you use the service at http://services.odata.org/OData/OData.svc/, you can get a list of Suppliers. This is http://services.odata.org/OData/OData.svc/Suppliers.
You can also ask for a list of suppliers that includes the list of products, using http://services.odata.org/OData/OData.svc/Suppliers?$expand=Products
Stepping through the ASP.NET code (via the symbol server), I got to System.Web.Http.OData.Formatter.Serialization.ODataEntityTypeSerializer and can see that its CreatePropertyBag method, which builds up the list of properties to be serialized, just doesn't include the navigation properties, and they don't seem to be enumerated anywhere else apart from being written out as navigation links.
I'm quite new to the ASP.NET world in general and have spent a week or so getting my head around the way things work (particularly with the changes made to OData at the end of 2012 and further changes made so far in 2013).
I suspect that if the ODataEntityTypeSerializer were modified (I'm happy to try) to embed this extra information in the appropriate spot (within each navigation link, as a nested inline feed, as best I can tell) then I'd be set.
Questions:
Have I overlooked something obvious, and there's a flag I can set to turn on this behaviour? I can see why, if such a flag exists, it would be off by default (EF lazy loading and this flag wouldn't get on well).
If #1 is no, is there some other ODataEntityTypeSerializer that I could use? If so, how do I switch to it?
If #2 is no, any pointers for where I should start writing my own? Is there a place I can substitute in my own serializer, or do I have to maintain my own fork of ASP.NET's Extensions project (as opposed to the Runtime project)?
Thanks very much!
$expand is very high on our list of things to support for OData. But as far as I know, we don't have any flag to turn it on server-side. The formatter doesn't currently allow you to substitute your own serializers either. So I'm afraid your only option in the meantime is to create a fork and add support for $expand. If you manage to get it working, please consider sending a pull request our way:
http://aspnetwebstack.codeplex.com/SourceControl/network
You can try it already in the Web API nightly builds.
Here is how to install them with NuGet:
http://aspnetwebstack.codeplex.com/wikipage?title=Use%20Nightly%20Builds
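Once you are on a nightly build with $expand enabled, a request along these lines should bring the related entities back inline (the host and route here are hypothetical):

using System;
using System.Net.Http;
using System.Threading.Tasks;

class ExpandDemo
{
    static void Main()
    {
        Run().Wait();
    }

    static async Task Run()
    {
        using (var client = new HttpClient())
        {
            // $expand asks the service to inline the Visits navigation property
            var json = await client.GetStringAsync(
                "http://localhost/odata/Patients?$expand=Visits");
            Console.WriteLine(json);
        }
    }
}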
I created an ASP.NET MVC4 Web API service (REST) with a single GET action. The action currently needs 11 input values, so rather than passing all of those values in the URL, I opted to encapsulate them in a single class and have it passed as the content body. When I test in Fiddler, I specify the verb as GET and enter the JSON text in the "Request Body" input box. This works great!
The problem is when I attempt to perform load testing in Visual Studio 2010 Ultimate. I am able to specify the GET action and the JSON content body just fine, but when I run the load test, VS reports exceptions of type ProtocolViolationException ("Cannot send a content-body with this verb-type") in the test results. The test executes in 1 ms, so I suspect the exceptions are causing the test to abort immediately. What can I do to avoid those exceptions? I'd prefer not to change my API to use URL arguments just to work around the test tooling. If I should change the API for other reasons, let me know. Thanks!
I found it easier to post this as an answer rather than carry on the discussion in comments.
Sending content with GET is not defined in RFC 2616, yet it is not prohibited either. So as far as the spec is concerned, we are in territory where we have to make our own judgement.
GET is canonically used to get a resource: you are retrieving the resource using this verb with the parameters you are sending. Since GET is both safe and idempotent, it is ideal for caching. Caching usually takes place based on the resource URI, and sometimes based on various headers. The point is that cache implementations, AFAIK, would not use the GET content (and to be honest, I have not seen any GET with content in the real world), and it would not make sense to include the content in the cache-key generation, since that would reduce the scalability of the caches.
If you have parameters to send, they must be in the URI, since they are part of what identifies that resource. As such, I strongly believe sending content with GET is wrong.
Even when you look at implementations such as OData, they put the criteria in the URI. I cannot imagine your (or any) application's requirements being beyond OData's query requirements.
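If you do move the values into the URI, Web API can still bind them all into a single complex type from the query string via [FromUri], so the action keeps its one-parameter shape; the DTO and its members below are hypothetical stand-ins for the 11 values:

using System.Net;
using System.Net.Http;
using System.Web.Http;

// Hypothetical DTO bundling the 11 input values.
public class PatientSearchCriteria
{
    public string Name { get; set; }
    public int? MinAge { get; set; }
    public int? MaxAge { get; set; }
    // ...one property per remaining parameter
}

public class PatientsController : ApiController
{
    // GET api/patients?name=Smith&minAge=30&maxAge=65
    public HttpResponseMessage Get([FromUri] PatientSearchCriteria criteria)
    {
        // Run the query using criteria here.
        return Request.CreateResponse(HttpStatusCode.OK);
    }
}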
I'm new to AppFabric Server caching, but after playing around with it, everything has been working like a dream.
I can add, for example, DataTables to my cache and get them back for use just fine.
I got excited about this functionality and tried to test it with a 3rd-party vendor's DLL that includes login session data (session ID, dates, etc.).
I created a WCF service with a method that consumes this DLL to log in, and I store that session in my cache.
This works just fine, and I can verify it by looking at the statistics of my cache with PowerShell.
Then I created another method that is supposed to pick up this cached session and use it to execute actions. This is where I'm running into a wall.
I can see that I have been able to get the session from the cache, but the information within the session object is null (session ID, dates...).
I've been searching everywhere for help with this, but nobody seems to face this issue.
So my questions are:
Can the AppFabric Server cache store ALL field values of a given object (public/non-public making no difference)?
Is there any way to see the actual content of the cache, where you would see keys and cached objects with their values?
Thanks for all possible comments!
Regards
Mikko
In AppFabric you can only cache objects that are serialisable (or serializable for US readers :-) ). The fact that you have been able to store your session objects in the cache suggests that they are indeed serialisable. But to figure out what's going on here we'll need to probe a little deeper.
By default with binary serialisation, all fields of an object are serialised, public and private (whereas XML serialisation only picks up the public values). We aren't told which flavour of serialisation AppFabric uses, but binary serialisation tends to be more efficient, so it's a reasonable assumption that that's what gets used under the covers. However, it's possible to override the serialisation behaviour using the NonSerialized attribute, so that items marked NonSerialized don't make it into the serialised version of the object. The MSDN page on Selective Serialisation specifically advises that security-sensitive information should be marked as non-serialisable.
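To illustrate with a contrived type (not the vendor's actual class): after a binary-serialisation round trip, a [NonSerialized] field simply comes back as null:

using System;

[Serializable]
public class ExampleSession
{
    public DateTime Created;      // serialised: survives the cache round trip

    [NonSerialized]
    private string sessionId;     // skipped: null after deserialisation

    public ExampleSession(string id)
    {
        sessionId = id;
        Created = DateTime.UtcNow;
    }

    public string SessionId { get { return sessionId; } }
}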
A session ID definitely comes under the heading of security-sensitive information as it's key for session hijacking, so I should say that's the problem you're facing. You could confirm this by having a look inside the 3rd party DLL with ILDasm or Reflector to see if the fields inside the session class are indeed marked as not serialised.
Can you get round this? Well, there is of course nothing to stop you creating your own session class that you populate from the 3rd party's object, keeping all the properties serialisable, and caching that instead. Bear in mind, however, that you're then essentially doing the very thing they've tried to stop you doing...
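A sketch of that workaround, assuming the vendor object exposes the values you need (its member names below are guesses) and that cache is an AppFabric DataCache instance:

using System;
using Microsoft.ApplicationServer.Caching;

// A fully serialisable snapshot of the vendor session.
[Serializable]
public class CachedLoginSession
{
    public string SessionId { get; set; }
    public DateTime Created { get; set; }
}

public static class SessionCaching
{
    public static void CacheLogin(DataCache cache, dynamic vendorSession)
    {
        var snapshot = new CachedLoginSession
        {
            SessionId = vendorSession.Id,     // assumed member names
            Created = vendorSession.Created
        };
        cache.Put("login-session", snapshot); // stores the serialisable copy
    }
}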