Let the mvc-mini-profiler ignore Glimpse requests - mvc-mini-profiler

I'm using mvc-mini-profiler along with Glimpse. The problem is that Glimpse is flooding the profiler output with Glimpse requests. Is there any way to ignore all requests made by Glimpse?

protected void Application_Start()
{
    var ignored = MiniProfiler.Settings.IgnoredPaths.ToList();
    ignored.Add("Glimpse.axd");
    MiniProfiler.Settings.IgnoredPaths = ignored.ToArray();
}
Solution posted here:
Mini MVC profiler: appears to be displaying profile times for every static resource

At the moment Glimpse will make Ajax requests if you have the Remote tab selected, or whenever an Ajax request is made by your site.
This is done because when we detect that a request has been made, we proactively fetch the Glimpse data. We could probably switch this in a future release to be lazier and only fetch the data on request.
Note that even though this will help, Glimpse will still be calling back to the server in the same way that MiniProfiler does. Hence, both frameworks could probably try to ignore each other's Ajax requests.
Hope this helps.

Related

HTMLUnit does not process jsonp request

I am trying to crawl my GWT app with HtmlUnit, but for a certain page the desired content is not returned. The GWT page contains a dynamically added JavaScript snippet which makes a JSONP request to a GAE server. I have already debugged the server code, and the breakpoint is hit, but by that time the HtmlUnit code has already finished and the returned content is incomplete.
I have tried almost all the solutions suggested on Stack Overflow, but without any success.
Here is the jsonp request.
http://30.tripstorekrabi.appspot.com/activity?&callback=__gwt_jsonp__.P0.onSuccess
On other pages I use exactly the same kind of call, and there it works fine.
Can anyone help me?
I found a workaround in my GWT code:
Now the jsonp request is executed in a deferred scheduled command:
Scheduler.get().scheduleDeferred(new ScheduledCommand() {
    @Override
    public void execute() {
        activityRegistry.loadActivities(new AsyncCallback<Result>() {
            // onSuccess / onFailure handlers omitted here
        });
    }
});
Now the JavaScript function is processed by HtmlUnit and the desired content is shown.
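On the HtmlUnit side, a related mitigation is to make the client wait for background JavaScript such as the JSONP call before reading the page. This is a hedged sketch using standard HtmlUnit APIs, not something from the question itself; the timeout value is arbitrary:

webClient.setAjaxController(new NicelyResynchronizingAjaxController());
webClient.waitForBackgroundJavaScript(10000); // wait up to 10 s for background JS to finish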

What does status=canceled for a resource mean in Chrome Developer Tools?

What would cause a page to be canceled? I have a screenshot of the Chrome Developer Tools.
This happens often but not every time. It seems like once some other resources are cached, a page refresh will load the LeftPane.aspx. And what's really odd is this only happens in Google Chrome, not Internet Explorer 8. Any ideas why Chrome would cancel a request?
We fought a similar problem where Chrome was canceling requests to load things within frames or iframes, but only intermittently and it seemed dependent on the computer and/or the speed of the internet connection.
This information is a few months out of date, but I built Chromium from scratch, dug through the source to find all the places where requests could get cancelled, and slapped breakpoints on all of them to debug. From memory, these are the only places where Chrome will cancel a request:
The DOM element that caused the request to be made got deleted (e.g. an IMG is being loaded, but before the load happened, you deleted the IMG node; sketched below)
You did something that made loading the data unnecessary (e.g. you started loading an iframe, then changed the src or overwrote the contents)
There are lots of requests going to the same server, and a network problem on earlier requests showed that subsequent requests weren't going to work (a DNS lookup error, an earlier identical request that resulted in an HTTP 400 error, etc.)
In our case we finally traced it down to one frame trying to append HTML to another frame, which sometimes happened before the destination frame had even loaded. Once you touch the contents of an iframe, it can no longer load a resource into it (how would it know where to put it?), so it cancels the request.
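A minimal sketch of the first cause listed above (the URL is illustrative): deleting the node that triggered a request cancels the request.

var img = document.createElement("img");
img.src = "/large-photo.jpg";    // the request starts here
document.body.appendChild(img);
document.body.removeChild(img);  // the request now shows as (canceled)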
status=canceled may also happen on Ajax requests triggered by JavaScript events:
<script>
$("#call_ajax").on("click", function(event){
    $.ajax({
        ...
    });
});
</script>
<button id="call_ajax">call</button>
The event successfully sends the request, but it is then canceled (though still processed by the server). The reason: a button element submits its enclosing form on click, no matter whether you also make an Ajax request in the same click event.
To prevent the request from being canceled, event.preventDefault(); has to be called in the handler:
<script>
$("#call_ajax").on("click", function(event){
    event.preventDefault();
    $.ajax({
        ...
    });
});
</script>
NB: Make sure you don't have any wrapping form elements.
I had a similar issue where my button with onclick={} was wrapped in a form element. When the button was clicked, the form was also submitted, and that messed it all up...
Another thing to look out for could be the AdBlock extension, or extensions in general.
But "a lot" of people have AdBlock....
To rule out extensions, open a new tab in incognito mode, making sure that "Allow in incognito" is off for the extension(s) you want to test.
In my case, I found that it was jQuery's global timeout setting: a jQuery plugin had set the global timeout to 500 ms, so whenever a request exceeded 500 ms, Chrome would cancel it.
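A hedged sketch of the kind of global setting involved (the value matches the case described above):

// somewhere in a plugin: every $.ajax request now times out after 500 ms,
// and timed-out requests show up as (canceled) in Chrome's Network tab
$.ajaxSetup({ timeout: 500 });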
You might want to check the "X-Frame-Options" header. If it's set to SAMEORIGIN or DENY, then the iframe insertion will be canceled by Chrome (and other browsers) per the spec.
Also, note that some browsers support the ALLOW-FROM setting but Chrome does not.
To resolve this, you will need to remove the "X-Frame-Options" header tag. This could leave you open to clickjacking attacks so you will need to decide what the risks are and how to mitigate them.
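For example, if the page happens to be served by Apache with mod_headers enabled (an assumption about your stack, not something from the answer), the header can be stripped like this:

Header always unset X-Frame-Options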
Here's what happened to me: the server was returning a malformed "Location" header for a 302 redirect.
Chrome failed to tell me this, of course. I opened the page in firefox, and immediately discovered the problem.
Nice to have multiple tools :)
Another place we've encountered the (canceled) status is in a particular TLS certificate misconfiguration. If a site such as https://www.example.com is misconfigured such that the certificate does not include the www. but is valid for https://example.com, Chrome will cancel this request and automatically redirect to the latter site. This is not the case for Firefox.
Currently valid example: https://www.pthree.org/
A cancelled request happened to me when redirecting between secure and non-secure pages on separate domains within an iframe. The redirected request showed in dev tools as a "cancelled" request.
I have a page with an iframe containing a form hosted by my payment gateway. When the form in the iframe was submitted, the payment gateway would redirect back to a URL on my server. The redirect recently stopped working and ended up as a "cancelled" request instead.
It seems that Chrome (I was using Windows 7 Chrome 30.0.1599.101) no longer allowed a redirect within the iframe to go to a non-secure page on a separate domain. To fix it, I just made sure any redirected requests in the iframe were always sent to secure URLs.
When I created a simpler test page with only an iframe, there was a warning in the console (which I had previously missed, or maybe it didn't show up):
[Blocked] The page at https://mydomain.com/Payment/EnterDetails ran insecure content from http://mydomain.com/Payment/Success
The redirect turned into a cancelled request in Chrome on PC, Mac and Android. I don't know if it is specific to my website setup (SagePay Low Profile) or if something has changed in Chrome.
Chrome Version 33.0.1750.154 m consistently cancels image loads if I am using Mobile Emulation pointed at my localhost, specifically with User-Agent spoofing on (vs. just the screen settings).
When I turn User-Agent spoofing off, image requests aren't canceled and I see the images.
I still don't understand why; in the former case, where the request is canceled, the Request Headers (CAUTION: Provisional headers are shown) have only:
Accept
Cache-Control
Pragma
Referer
User-Agent
In the latter case, all of those plus others like:
Cookie
Connection
Host
Accept-Encoding
Accept-Language
Shrug
I got this error in Chrome when I redirected via JavaScript:
<script>
window.location.href = "devhost:88/somepage";
</script>
As you see I forgot the 'http://'. After I added it, it worked.
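For clarity, the working redirect simply includes the scheme:

<script>
window.location.href = "http://devhost:88/somepage";
</script>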
Here is another case of a request being canceled by Chrome, which I just encountered and which is not covered by any of the answers above.
In a nutshell
Self-signed certificate not being trusted on my android phone.
Details
We are in development/debug phase. The url is pointing to a self-signed host. The code is like:
location.href = 'https://some.host.com/some/path'
Chrome just canceled the request silently, leaving no clue for a newcomer to web development like myself to fix the issue. Once I downloaded and installed the certificate on the Android phone, the issue was gone.
If you use axios, adjusting the request timeout may help:
// "instance" here is an axios instance, e.g. const instance = axios.create();
// change the timeout delay (in milliseconds):
instance.defaults.timeout = 2500;
https://github.com/axios/axios#config-order-of-precedence
In my case, I had an anchor with a click event like
<a href="" onclick="somemethod($index, hour, $event)">
Inside the click event I had a network call, and Chrome was cancelling the request. An anchor whose href is "" reloads the page, so the network call started by the same click event gets cancelled. When I replaced the href with void, like
<a href="javascript:void(0)" onclick="somemethod($index, hour, $event)">
The problem went away!
If you use Observable-based HTTP requests like those built into Angular (2+), the HTTP request can be canceled when the observable gets canceled (a common occurrence when you use the RxJS switchMap operator to combine streams). In most cases it's enough to use the mergeMap operator instead, if you want the request to complete.
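A hedged sketch of the difference, assuming Angular's HttpClient and an RxJS Subject (all names are illustrative, not from the answer):

import { HttpClient } from '@angular/common/http';
import { Subject } from 'rxjs';
import { switchMap, mergeMap } from 'rxjs/operators';

declare const http: HttpClient;            // assume an injected HttpClient
const searchTerms = new Subject<string>(); // e.g. terms typed by the user

// switchMap cancels the in-flight request whenever a new term arrives;
// the abandoned request shows up as (canceled) in the Network tab.
searchTerms.pipe(switchMap(term => http.get('/api/search?q=' + term))).subscribe();

// mergeMap lets every request run to completion instead.
searchTerms.pipe(mergeMap(term => http.get('/api/search?q=' + term))).subscribe();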
I had faced the same issue, somewhere deep in our code we had this pseudocode:
create an iframe
onload of iframe submit a form
After 2 seconds, remove the iframe
Thus, when the server took more than 2 seconds to respond, the iframe that the server was writing the response to was removed while the response was still being written; there was no iframe to write to, so Chrome cancelled the request. To avoid this, I made sure that the iframe is removed only after the response is complete (see the sketch below); alternatively, you can change the form's target to "_blank".
Thus, one of the reasons is:
when the resource (an iframe in my case) that you are writing to is removed or deleted before you stop writing to it, the request will be cancelled
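A minimal sketch of the fix, assuming a form with id "myForm" (all names are hypothetical): remove the iframe only once its content has loaded, instead of on a fixed timer.

var iframe = document.createElement("iframe");
iframe.name = "hidden_target";
iframe.onload = function () {
    // the response has finished loading, so it is now safe to remove the iframe
    // (some browsers fire an initial load for about:blank; guard against that if needed)
    document.body.removeChild(iframe);
};
document.body.appendChild(iframe);
var form = document.getElementById("myForm");
form.target = "hidden_target"; // or "_blank", as mentioned above
form.submit();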
I had embedded all font types (woff, woff2, ttf) when embedding a web font in a style sheet. Recently I noticed that Chrome cancels the requests for ttf and woff when woff2 is present. I am using Chrome 66.0.3359.181 right now, but I am not sure when Chrome started cancelling the extra font types.
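For reference, a typical @font-face declaration listing all three formats looks like this (file names are illustrative); given a src list like this, Chrome normally downloads only the first format it supports (woff2) and drops the rest:

@font-face {
    font-family: "MyWebFont";
    src: url("mywebfont.woff2") format("woff2"),
         url("mywebfont.woff") format("woff"),
         url("mywebfont.ttf") format("truetype");
}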
We had this problem with a <button> tag inside the form that was supposed to send an Ajax request from JS. But the request was canceled because the browser automatically submits the form on any click on a button inside the form.
So if you really want to use a button instead of a regular div or span on the page, and you want to submit the form through JS, you should set up a listener that calls preventDefault.
e.g.
$('button').on('click', function(e){
    e.preventDefault();
    // do ajax
    $.ajax({
        ...
    });
});
I had the exact same thing with two CSS files that were stored in another folder outside my main css folder. I'm using Expression Engine and found that the issue was in the rules in my htaccess file. I just added the folder to one of my conditions and it fixed it. Here's an example:
RewriteCond %{REQUEST_URI} !(images|css|js|new_folder|favicon.ico)
So it might be worth checking your htaccess file for any potential conflicts.
The same thing happened to me when loading a .js file with $.ajax and then making an Ajax request inside it; what I did was load it normally instead.
In my case, the code that opened the e-mail client window caused Chrome to stop loading images:
document.location.href = mailToLink;
moving it to $(window).load(function () {...}) instead of $(function () {...}) helped.
In case this helps anybody: I came across the cancelled status when I left out the return false; in the form submit handler. This caused the Ajax send to be immediately followed by the default submit action, which overwrote the current page. The code is shown below, with the important return false at the end.
$('form').submit(function () {
    $.validator.unobtrusive.parse($('form'));
    var data = $('form').serialize();
    // serialize() returns a string, so append the token rather than setting a property on it
    data += '&__RequestVerificationToken=' +
        encodeURIComponent($('input[name=__RequestVerificationToken]').val());
    if ($('form').valid()) {
        $.ajax({
            url: this.action,
            type: 'POST',
            data: data,
            success: submitSuccess,
            error: submitFailed // $.ajax takes "error", not "fail"
        });
    }
    return false; // needed to stop the default form submit action
});
Hope that helps someone.
For anyone coming from LoopbackJS and attempting to use the custom stream method as provided in their chart example: I was getting this error using a PersistedModel; switching to a basic Model fixed my issue of the EventSource status cancelling out.
Again, this is specific to the LoopBack API. Since this is a top answer and high up on Google, I figured I'd throw this into the mix of answers.
For me, the 'canceled' status appeared because the file did not exist. Strange that Chrome does not show a 404.
It was as simple as an incorrect path for me. I would suggest that the first step in debugging is to see whether you can load the file independently of Ajax etc.
The requests might have been blocked by a tracking protection plugin.
It happened to me when loading 300 images as background images. I'm guessing that once the first one timed out, Chrome cancelled all the rest, or the maximum number of concurrent requests was reached; I need to implement a 5-at-a-time queue.
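A minimal sketch of such a 5-at-a-time loader (all names are illustrative, not from the answer):

function loadInBatches(urls, limit) {
    var index = 0;
    function next() {
        if (index >= urls.length) return;
        var img = new Image();
        img.onload = img.onerror = next; // start the next image when this one settles
        img.src = urls[index++];
    }
    // prime "limit" loads; each completion starts exactly one more
    for (var i = 0; i < limit; i++) next();
}
loadInBatches(backgroundUrls, 5); // backgroundUrls: hypothetical array of image URLs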
One of the reasons could be that XMLHttpRequest.abort() was called somewhere in the code; in this case, the request will have the canceled status in the Chrome Developer Tools Network tab.
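A minimal sketch (the endpoint is hypothetical):

var xhr = new XMLHttpRequest();
xhr.open("GET", "/some/slow/endpoint");
xhr.send();
xhr.abort(); // the request now appears as "canceled" in the Network tab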
In my case, it started appearing after the Chrome 76 update.
Due to an issue in my JS code, window.location was being updated multiple times, which resulted in cancelling the previous request.
Although the issue was present before, Chrome started cancelling the requests after the update to version 76.
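A sketch of the kind of bug described (URLs are made up): the second assignment cancels the navigation started by the first.

window.location.href = "/first-page";  // this request is canceled...
window.location.href = "/second-page"; // ...as soon as this one replaces it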
I had the same issue when updating a record. Inside save() I was prepping the raw data taken from the form to match the database format (doing a lot of mapping of enum values, etc.), and this intermittently cancelled the PUT request. I resolved it by taking the data prepping out of save() and creating a dedicated dataPrep() method, which I made async so I could await all the memory-intensive data conversion. It returns the prepped data to save(), where I use it in the HTTP PUT client, making sure to await dataPrep() before calling the PUT method:
const dataToUpdate = await dataPrep();
http.put(apiUrl, dataToUpdate);
This solved the intermittent cancelling of requests.

MVC3 OutputCache not working on Server and Client as expected

I'm having trouble using the OutputCache attribute in Microsoft's MVC3 framework.
Please imagine the following controller action, which can be used as part of an AJAX call to get a list of products based on a particular manufacturerId:
public JsonResult GetProducts(long manufacturerId)
{
    return Json(this.CreateProductList(manufacturerId), JsonRequestBehavior.AllowGet);
}
I want this result to be cached on the server to avoid making excessive database queries. I can achieve this by configuring the attribute thus:
[OutputCache(Duration = 3600, Location = OutputCacheLocation.Server, VaryByParam = "manufacturerId")]
This works as I expected: the browser makes an initial request which causes the server to create and cache the result; subsequent requests from the same or a different browser get the cached version.
But... I also want the browser to cache these results locally; if I filter first on manufacturer X, then Y then go back to X, I don't want it to make another request for X's products - I want it to just use its cached version.
I can make this happen, by changing the OutputCache to this:
[OutputCache(Duration = 3600, Location = OutputCacheLocation.Client)]
Here's the question: how do I combine these so that I can have both sets of behaviour? I tried setting the Location to ServerAndClient but this just made it behave the same as when Location was Server.
I'm sure that the problem has something to do with the "Vary: *" response header I get with ServerAndClient but I don't know how to get rid of it.
I welcome comments about the rationality of my intentions - the fact that I'm not getting the results I expect makes me think I might have some fundamental misunderstanding somewhere...
Many thanks.
PS: This is on a local dev environment, IIS Express from VS2010.
You can use OutputCacheLocation.Any, which specifies:
The output cache can be located on the browser client (where the request originated), on a proxy server (or any other server) participating in the request, or on the server where the request was processed. This value corresponds to the HttpCacheability.Public enumeration value.
You may also want to set Cache-Control: public in the HTTP header for these requests.
Edit
It seems that, depending on the .NET version of your web server, you may need to include Response.Cache.SetOmitVaryStar(true); within your controller action to remove the Vary: * header, as you suggest.
Details of the reason why are in the .NET 4 breaking changes release notes.
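Putting the two suggestions together, a sketch of the action from the question with OutputCacheLocation.Any plus the SetOmitVaryStar call (attribute values are copied from the question, not verified here):

[OutputCache(Duration = 3600, Location = OutputCacheLocation.Any, VaryByParam = "manufacturerId")]
public JsonResult GetProducts(long manufacturerId)
{
    // Suppress the "Vary: *" header so the browser will cache the response too.
    Response.Cache.SetOmitVaryStar(true);
    return Json(this.CreateProductList(manufacturerId), JsonRequestBehavior.AllowGet);
}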

Ajax call getting canceled by browser

I am using the Prototype JS framework to do Ajax calls. Here is my code:
new Ajax.Request('/myurl.php', {
    method: 'post',
    postBody: 'id=' + id + '&v=' + foo,
    onSuccess: success,
    onFailure: failed
});
function success(ret) {
    console.log("success", ret.readyState, ret.status);
}
function failed(ret) {
    console.log("failed", ret.readyState, ret.status);
}
Most of the time, this works fine and the success function is called with a status code of 200. About 5% of the time on Safari the success function is called with a status code of 0. In this case, when I look in the Network tab of the web inspector, the ajax call is listed with a status of "canceled". I can confirm with server logs, that the request never hit the server. It's as if the ajax request was immediately canceled without even trying to connect to the server. I have not found any reliable way to reproduce this, it seems to be random. I do it 20 times and it happens once.
Does anyone know what would cause the ajax call to get canceled or return a status code of 0?
The cause may be the combination of HTTP server and browser you are using. It doesn't seem like an error in the PrototypeJS library.
Multiple sources state that the keep-alive parameter of the HTTP connection seems to be broken in Safari (see here, here or here). On Apache, they recommend adding this to the configuration:
BrowserMatch "Safari" nokeepalive
(Please check the appropriate syntax in your server documentation).
If Safari handles HTTP persistent connections with your server badly, that may explain what you are experiencing.
If it's not too complex for you, I would try another HTTP server, there are plenty available on every OS.
We lack a bit of information to answer your question fully, though. The server issue is a lead, but there may be others. It would be nice to know whether the same thing happens in other browsers (Firefox with Firebug will display this kind of information; Chrome, Opera and IE have built-in developer toolboxes). Another valid question would be how often you execute this Ajax request per second (if relevant).
I know this is an old topic, but I wanted to share a solution for Safari that might save others some time. The following line really solved all problems:
BrowserMatch "^(?=.*Safari)(?=.*Macintosh)(?!.*Chrom).*" nokeepalive gzip-only-text/html
The regex makes sure only Safari on Mac is detected, not Mobile Safari, Chrome(ium) and such. Safari for Windows is also not matched, but the keep-alive problem seems to be a Mac-Safari combination only. In addition, some Safari versions do not handle gzipped css/js well.
All the symptoms of our site crashing or CSS not completely loading in different versions of Safari, which nearly caused me to pull my hair out (Safari really is the new IE), have been solved for us with this Apache 'configuration hack'.

NETWORK_ERROR: XMLHttpRequest Exception 101

I am getting this Error
NETWORK_ERROR: XMLHttpRequest Exception 101
when trying to get XML content from one site.
Here is my code:
var xmlhttp;
if (window.XMLHttpRequest) {
    xmlhttp = new XMLHttpRequest();
}
if (xmlhttp == null) {
    alert("Your browser does not support XMLHTTP!");
    return;
}
// Note: the property name is case-sensitive; it must be "onreadystatechange".
xmlhttp.onreadystatechange = function() {
    if (xmlhttp.readyState == 4) {
        var value = xmlhttp.responseXML;
        alert(value);
    }
};
xmlhttp.open("GET", url, false);
xmlhttp.send(null);
Does any one have a solution?
If the URL you provide is located externally to your server, and that server has not allowed you to send requests, you have a permission problem. You cannot access data from another server with an XMLHttpRequest unless the server explicitly allows you to do so.
Update: Realizing this is now visible as an answer on Google, I tried to find some documentation on this error. That was surprisingly hard.
This article though, has some background info and steps to resolve. Specifically, it mentions this error here:
As long as the server is configured to allow requests from your web application's origin, XMLHttpRequest will work. Otherwise, an INVALID_ACCESS_ERR exception is thrown
An interpretation of INVALID_ACCESS_ERR seems to be what we're looking at here.
To solve this, the server that receives the request, must be configured to allow the origin. This is described in more details at Mozilla.
The restriction that you cannot access data from another server with an XMLHttpRequest can apply even if the URL merely implies a remote server.
So:
url = "http://www.myserver.com/webpage.html"
may fail,
but:
url = "/webpage.html"
may succeed - even if the request is being made from www.myserver.com
Request aborted because it was cached or previously requested? It seems the XMLHttpRequest Exception 101 error can be thrown for several reasons. I've found that it occurs when I send an XMLHttpRequest with the same URL more than once. (Changing the URL by appending a cache-defeating nonsense string to the end allows the request to be repeated. I wasn't intending to repeat the request, but events in the program caused it to happen and resulted in this exception.)
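A hedged sketch of the cache-defeating trick mentioned above (the parameter name is made up): append a unique query parameter so each request URL is distinct.

var xhr = new XMLHttpRequest();
xhr.open("GET", url + "?nocache=" + new Date().getTime(), true);
xhr.send(null);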
Not returning the correct responseText or responseXML in the event of a repeated request is a bug (probably in WebKit).
When this exception occurred, I did get an onload event with readyState == 4, but the request object had status == 0, responseText == "" and responseXML == null. This was a cross-domain request, which the server permits.
This was on an Android 2.3.5 system, which uses WebKit/533.1.
Anyone have documentation on what the exception is supposed to mean?
Something like this happened to me when I returned incorrect XML (I put an attribute in the root node). In case this helps anyone.
xmlhttp.open("GET",url, true);
set the async part to true
I found a very nice article with 2 different solutions.
The first one implements jQuery and JSONP, explaining how simple it is.
The second approach redirects through a PHP call. Very simple and very nice.
http://mayten.com.ar/blog/42-ajax-cross-domain
Another modern method of solving this problem is Cross-Origin Resource Sharing (CORS).
HTML5 offers this feature. You can "wrap" your XMLHttpRequest in a CORS request, and
if the target browser supports the feature, you can use it and won't have any problems.
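A small sketch of feature-detecting CORS support before making the cross-origin call (the URL is illustrative):

var xhr = new XMLHttpRequest();
if ("withCredentials" in xhr) {
    // CORS-capable browser; the request succeeds only if the server
    // sends an appropriate Access-Control-Allow-Origin header.
    xhr.open("GET", "http://other-domain.example/data.xml", true);
    xhr.send(null);
}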
EDIT:
Additionally, I have to add that there are many reasons which can cause this issue: not only a cross-domain restriction, but also simply wrong settings in the web.config of your web service.
Example for IIS (.NET):
To enable HTTP access from external sources (in my case a compiled PhoneGap app making a CORS request), you have to add this to your web.config:
<webServices>
  <protocols>
    <add name="HttpGet"/>
    <add name="HttpPost"/>
  </protocols>
</webServices>
Another scenario:
I had two web services running... one on port 80 and one on port 90. This also gave me an XMLHttpRequest error, and I don't even know why :). Nevertheless, I think this can help many less experienced readers.
