Ok this is weird.
If I make a request to a page where the content type is text/html, Firefox makes one request.
If I make a request to a page where the content type is application/xml, Firefox makes two requests.
In IE and Google Chrome, only one request is made in both cases.
Any ideas why Firefox makes two requests, and how I can force just the one?
I've had a similar issue when the encoding of the page didn't match the <meta> tag. If the page was encoded using the default Windows encoding but the meta tag specified UTF-8, Firefox would stop downloading once it reached a non-ASCII character (e.g. æ, ø or å) and would re-download the page from the beginning. This messed up view counts and lots of other logic, since the server-side script would run twice.
It might be that if your page does not start with an <?xml ?> declaration but claims to be XML, Firefox will re-download the page as HTML (text/html) and process it as HTML.
Just to add another possibility...
If the HTML contains an img tag with an empty src attribute, this causes a second HTTP request in both Firefox and Chrome. Currently, those are the browsers that follow the standard to the letter, which states that an empty URI reference refers to the absolute base URI.
I've had a similar problem with Firefox. Might help someone.
FF was making two HTTP GET requests while Chrome made only one.
The problem was an empty src="" attribute.
Firefox treats such empty src attributes on img/script/etc. tags as the current URL and GETs the current page again.
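A quick way to check a page for this from the browser console (a minimal sketch; the selectors assume the culprit is an img or script tag with an empty src attribute):

Array.prototype.slice.call(document.querySelectorAll('img[src=""], script[src=""]'))
  .forEach(function (el) {
    // each of these can trigger an extra request for the current page's URL
    console.log('Empty src on:', el.outerHTML);
  });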
Maybe you're making the request in a way that causes the HTTP Access Control features to fire?
It is a fairly new standard, new in FF3.5, that can cause double GET requests: https://developer.mozilla.org/En/Server-Side_Access_Control
In case you can sniff the requests server-side: see if they contain the Origin: header.
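If you can log requests on the server, a minimal Node.js sketch (hypothetical, not from the original answer) that prints the Origin header of each incoming request:

var http = require('http');

http.createServer(function (req, res) {
  // Log method, URL and the Origin header (present on cross-origin requests),
  // so a preflight OPTIONS request shows up as its own line.
  console.log(req.method, req.url, 'Origin:', req.headers['origin'] || '(none)');
  res.writeHead(200, {'Content-Type': 'text/plain'});
  res.end('ok');
}).listen(8000);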
In my case it was a wrong Content-Type header, "image/jpg", sent with a PHP-generated image. The double requests were gone after I changed the type to "image/jpeg".
More info about this bug...
https://bugzilla.mozilla.org/show_bug.cgi?id=236858
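For illustration, a hedged sketch of the same idea in Node.js (the original fix was in PHP): serve the generated image with the registered MIME type "image/jpeg" rather than "image/jpg".

var http = require('http');
var fs = require('fs');

http.createServer(function (req, res) {
  res.writeHead(200, {'Content-Type': 'image/jpeg'}); // "image/jpeg", not "image/jpg"
  fs.createReadStream('picture.jpg').pipe(res);       // hypothetical file name
}).listen(8000);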
I met this problem too and I've figured it out. It may be related to a non-existent favicon.ico. You can check it with the following code (Node.js):
var http = require('http');

var server = http.createServer(function (req, res) {
  console.log(req.url);                                 // log every requested path
  res.writeHead(200, {"Content-Type": "text/html"});    // writeHead is the documented API
  res.end("Hello World");
});

server.listen(8000);
console.log("httpd start #8000");
The expected output is:
httpd start #8000
/
/favicon.ico
Found the problem.
The XML packet I was returning had a root node of <feed>
Firefox requests this twice for some reason, maybe because it's trying to identify whether it's a valid Atom/RSS feed; if not, it just displays it instead.
Changing root node to something else fixed the problem.
Thanks Marcus for starting me in the right direction.
Related
I found that a specific client (Win7 + IE8) can't download a file (a PDF)
whose response contains Cache-Control: no-cache in the HTTP header:
http://www.doosan.com/doosaniv/download.do?path=product&sav=225806754671.pdf&ori=d70s-5_plus.pdf&dir=20110630
But if the header contains Cache-Control: no-cache="set-cookie", there's no problem downloading it:
http://www.doosan.com/doosaniv/download.do?path=product&sav=225515770296.pdf&ori=d18s-5.pdf&dir=20110630
And, in the first situation, if I run IE8 as Administrator, there's no problem downloading it.
(Note that I log on as Administrator in Win7. It's weird.)
I found a blog post that talks about SSL and no-cache. I think it's a similar but different problem.
Thank you.
Thank you for posting this question. The links and examples were very helpful in solving other problems.
From the MSDN article you link to:
"if a user tries to download* a file over a HTTPS connection, any response headers that prevent caching will cause the file download process to fail."
I'm guessing that IE8 doesn't respect Cache-Control:no-cache="set-cookie" as a proper header, and thus believes there is nothing preventing cache and the download is allowed to continue.
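If you control the server, a common workaround (sketched here in Node.js, purely as an assumption about the backend) is to avoid no-cache/no-store on HTTPS downloads and use Cache-Control: private instead:

var http = require('http');
var fs = require('fs');

http.createServer(function (req, res) {
  res.writeHead(200, {
    'Content-Type': 'application/pdf',
    'Content-Disposition': 'attachment; filename="report.pdf"', // hypothetical file name
    // "private" still discourages shared caches, but unlike "no-cache"/"no-store"
    // it lets IE keep the file long enough to hand it to the download dialog.
    'Cache-Control': 'private'
  });
  fs.createReadStream('report.pdf').pipe(res);
}).listen(8000); // in production this would sit behind HTTPS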
Sometimes I come across an image that I can't scrape so that it can be saved. An example of this is:
https://s3.amazonaws.com/plumdistrict.com-production/perks/12321/image/original.?1325898487
When I hit the URL from Internet Explorer I see the image, but when I try to get it with the code below, GetResponse throws "System.Net.WebException: The remote server returned an error: (403) Forbidden":
string url = "https://s3.amazonaws.com/plumdistrict.com-production/perks/12321/image/original.?1325898487";
WebRequest request = WebRequest.Create(url);
WebResponse response = request.GetResponse();
Any ideas on how to get this image?
Edit:
I am able to save images that do have extensions. For example I can scrape the following image just fine:
https://s3.amazonaws.com/plumdistrict.com-production/perks/12659/image/original.jpg?1326828951
Although HTTP is originally supposed to be stateless, there are a lot of implementations that rely on it not being stateless. I could configure my webserver to only accept requests for "http://mydomain.com/sexy_avatar.jpg" if you provide a cookie proving you were logged in. If not, I send you a 303 redirect to "http://mydomain.com/avatar_for_public_use.jpg".
Amazon could be doing the same. Try to load the web page using Chrome, and look at the Network view in developer mode (CTRL+SHIFT+J) to see all headers supplied to the website. Maybe you even need to do a full navigation in the same session before you are allowed to see the image. This is certainly the case in many web applications I have developed :-)
Well, it looks like it's being generated from a script (possibly being retrieved from a database). The server should be sending a file/content type to go along with that... but it doesn't seem to be, which I believe is a violation of standards.
My Linux box knows full well that that's a JPEG image once it's on my hard drive, because it examines file headers rather than relying on extensions. Perhaps there is a tool to do the same in Windows?
Edit: Actually, on further contemplation, it seems odd that you'd get a 403 for that. Perhaps the server is actually blocking you from retrieving the file in that manner.
It's the first time I am doing something with headers. I am mainly concerned with Cache-Control but there may be others I will need to check as well. For example, I try to send the following header to the browser (based on tutorials I just read):
Cache-Control:private, max-age=2011-12-30 11:40:56
Google Chrome displays it this way in Network -> Headers -> Response headers, but how do I know if it's correct, that there aren't any typos, syntax errors and such? Will it really work? Will the browser behave the way I want it to, or will it treat it as gibberish (something like "unknown header/value")? I've tried sending nonsensical headers on purpose, but they got displayed with the rest. Is there any Chrome tool/addon for that, or any other way? Thank you in advance!
I'm afraid you won't be able to check if the resource has been cached by proxies en route, but you can check if your browser has cached it.
While in the Network panel of Chrome DevTools, hit F5 to reload your page. You should see something like "304 Not Modified" in the status field for the resource you are treating (which means the resource has not been modified and its contents were not received from the server but rather loaded from the browser's cache.)
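One thing worth noting about the header in the question: max-age takes a number of seconds, not a date. A minimal Node.js sketch (hypothetical, just to show the syntax) of a well-formed header:

var http = require('http');

http.createServer(function (req, res) {
  res.writeHead(200, {
    'Content-Type': 'text/html',
    // "private" = only the browser may cache it; max-age is in seconds (1 hour here)
    'Cache-Control': 'private, max-age=3600'
  });
  res.end('Hello World');
}).listen(8000);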
There are several topics about the problem with cross-domain AJAX. I've been looking at these and the conclusion seems to be this:
Apart from using something like JSONP or a proxy solution, you should not be able to do a basic jQuery $.post() to another domain.
My test code looks something like this (running on "http://myTestdomain.tld/path/file.html")
var myData = {datum1: "datum", datum2: "datum"};
$.post("http://External-Ip:port", myData, function (response) { alert(response); });
When I tried this (the reason I started looking), the Chrome console told me:
XMLHttpRequest cannot load http://External-IP:port/page.php. Origin http://myTestdomain.tld is not allowed by Access-Control-Allow-Origin.
Now this is, as far as I can tell, expected. I should not be able to do this. The problem is that the POST actually DOES come through. I've got a simple script running that saves the $_POST to a file, and it is clear the POST gets through. Any real data I return is not delivered to my calling script, which again seems expected because of the access-control issue. But the fact that the POST actually arrived at the server got me confused.
Is it correct that I assume the above code running on "myTestdomain" should not be able to do a simple $.post() to the other domain (External-IP)?
Is it expected that the request would actually arrive at the external IP's script, even though the output is not received? Or is this a bug? (I'm using Chrome 11.0.696.60.)
I posted a ticket about this on the WebKit bugtracker earlier, since I thought it was weird behaviour and possibly a security risk.
Since security-related tickets aren't publicly viewable, I'll quote the reply from Justin Schuh here:
This is implemented exactly as required by the spec. For simple cross-origin requests (http://www.w3.org/TR/cors/#simple-method) there is no pre-flight check; the request is made and the response cannot be read if the appropriate headers do not authorize the requesting origin. Functionally, this is no different than creating a form and using script to make an off-origin POST (which has always been possible).
So: you're allowed to do the POST, since you could have done that anyway by embedding a form and triggering the submit button with JavaScript, but you can't see the result, because you wouldn't be able to in the form scenario either.
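To illustrate that equivalence, a small sketch (reusing the placeholder URL and field name from the question):

var form = document.createElement('form');
form.method = 'POST';
form.action = 'http://External-IP:port/page.php'; // placeholder URL from the question

var field = document.createElement('input');
field.type = 'hidden';
field.name = 'datum1';
field.value = 'datum';
form.appendChild(field);

document.body.appendChild(form);
form.submit(); // the POST goes out, but this page navigates away and never reads the response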
A solution would be to add a header to the script running on the target server, e.g.
<?php
header("Access-Control-Allow-Origin: http://your_source_domain");
....
?>
Haven't tested that, but according to the spec, that should work.
Firefox 3.6 seems to handle it differently, by first doing an OPTIONS to see whether or not it can do the actual POST. Firefox 4 does the same thing Chrome does, or at least it did in my quick experiment. More about that is on https://developer.mozilla.org/en/http_access_control
The important thing to note about the JavaScript same-origin policy restriction is that it is something built into modern browsers for security - it is not a limitation of the technology or something enforced by servers.
To answer your question, neither of these are bugs.
Requests are not stopped from reaching the server - this gives the server the option to allow these cross-domain requests by setting the appropriate headers.
The response is also received back by the browser. Before the use of the access control headers, responses to cross-domain requests would be stopped dead in their tracks by a security-conscious browser - the browser would receive the response but it would not hand it off to the script. With the access control headers, the server has the option of setting the appropriate headers indicating to a compliant browser that it would like to allow certain origin URLs to make cross-domain requests.
The exact behaviour on response might differ between browsers - I can't recall for sure now but I think Chrome calls the success callback function when using jQuery's ajax() but the response is empty. IIRC, Firefox will not invoke the success function.
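For completeness, a hedged sketch of what the calling side could look like with jQuery's deferred callbacks (jQuery 1.5+ assumed); which callback fires for a blocked cross-origin response differs between browsers, as noted above:

$.post("http://External-Ip:port", myData)
  .done(function (response) {
    // Chrome (at the time) could end up here with an empty response
    console.log("done:", response);
  })
  .fail(function (xhr) {
    // Firefox tended to end up here instead; xhr.status is typically 0 when
    // the browser has blocked access to the response
    console.log("fail, status:", xhr.status);
  });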
I get the same thing happening for me. You are able to post across domains but are not able to receive a response. This is what I expected to be able to do and happens for me in Firefox, Chrome, and IE.
One way to kind of get around this caveat is having a local PHP file which will fetch the data via cURL and return the response to your JavaScript. (Kind of restating what you said you knew already.)
Yes, it's correct; you won't be able to do that unless you use a proxy.
No, the request won't go to the external IP as long as there is such a limitation.
I am seeing some of the server calls (used for tracking purposes) on my site getting aborted in Firefox when viewed through HttpFox. This happens while clicking a link that loads another page in the same window; it works fine with a popup. The error type shown is NS_BINDING_ABORTED. I need to know whether the tracking call is hitting the server or not.
It works perfectly in Internet Explorer. Is this a problem with the tool? If so, can you suggest one that can be used with Firefox too?
Because your server is not sending HTTP Expires headers, the browser is checking to see if what is in its cache is current.
The way it does this is to send the server a request stating the date of what it has in the cache, and the server sends a 304 status telling the client that what it has is current. In other words, the server is not sending the entire content again but instead sending just a short header saying the existing cache content is current.
What you probably need to fix is to add Expires headers to what you are serving. Then you will see the NS_BINDING_ABORTED message change to (cache), meaning the browser is simply getting the content out of its cache, knowing it has not yet expired.
I should add that, when you do a Firefox forced refresh, it assumes that you want to double-check what is in the cache, so it temporarily ignores Expires.
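As a rough illustration (a Node.js sketch, not part of the original answer), sending Expires or a max-age lets the browser reuse its cached copy without the conditional request described above:

var http = require('http');

http.createServer(function (req, res) {
  var oneDay = 24 * 60 * 60; // seconds
  res.writeHead(200, {
    'Content-Type': 'text/css',
    'Cache-Control': 'public, max-age=' + oneDay,
    // Expires is the older, equivalent way to say the same thing
    'Expires': new Date(Date.now() + oneDay * 1000).toUTCString()
  });
  res.end('body { margin: 0; }');
}).listen(8000);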
You shouldn't be worried just because you see something that looks like a failure code (NS_BINDING_ABORTED).
In one post a Firefox developer confirms that NS_BINDING_ABORTED is simply an indication that a page load has been stopped.
It seems perfectly normal that opening a page while another page is being loaded cancels the loads on the first page. It doesn't necessarily mean the loads were aborted before the request got sent to the server, which seems to be what you care about.
[edit: reworded & removed the bit about me not being familiar with HttpFox, as people who see this in 2022 are probably not using it anyway.]
What other javascript do you have on the page? Some javascript might be firing causing the request to be aborted.
I noticed the same thing in my application. I was redirecting the page in javascript (window.location = '/some/page.html') but then further down the block of code, I was calling 'window.reload()'. The previous redirection was aborted because window.reload was called.
I don't know what tracking you are using but it's possible that the request is being sent to your server but the request is aborted because another request was issued afterwards.
I have experienced a similar problem, but have identified the cause.
I have a link in the first cell of a table row, and some Javascript that replicates that link across the other TD's of the row. When I click on the 'real' link (in the first cell) I get this unwanted side-effect; when I click on other cells in the row, all is fine. I feel it's because the script is adding a second link to that first cell, when it already has one.
Hence, two instantaneous requests for the same page, with the first being aborted by the second.
This technique is fairly common, so something to look out for.
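A sketch of a safer version of that pattern (hypothetical, not the original script): wire up the other cells, but skip the cell that already contains the real link.

var rows = document.querySelectorAll('table.listing tr'); // hypothetical selector
Array.prototype.forEach.call(rows, function (row) {
  var link = row.querySelector('td:first-child a');
  if (!link) { return; }
  Array.prototype.forEach.call(row.querySelectorAll('td'), function (cell) {
    if (cell.contains(link)) { return; } // don't add a second link to the first cell
    cell.addEventListener('click', function () {
      window.location.href = link.href;
    });
  });
});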
NS_BINDING_ABORTED error - one approach: using a JavaScript setInterval with a delay of 0 to 100 milliseconds (depending on the page load), we can execute the tracking request after the default page submit request has been processed.
For example:
var el = document.getElementById("t");
el.addEventListener("click", avoidNSError, false); // Firefox

function avoidNSError() {
  // Defer the tracking request so it runs after the default submit has been processed.
  var ElementInterval = setInterval(function () {
    /* Tracking or other request code goes here */
    clearInterval(ElementInterval);
  }, 0);
}
In my case it was the same NS_BINDING_ABORTED error, but it was because a "button" element, which I clicked to trigger an event, was missing the attribute type="submit" - see How to prevent buttons from submitting forms.
The error NS_BINDING_ABORTED can have a variety of reasons.
In my case it was garbage in the response headers received from the server, basically a HTTP protocol violation.
Using a web debugging proxy such as Fiddler may sometimes reveal such issues better than the browser's own debugging console (which today does what, I assume, HttpFox did, just better), or at least show more detailed information or clearer error messages.
I know this is a very old question, but this happened to me recently with Firefox 95. The images of an ancient application made by a colleague of mine were not loading (or loaded randomly) because of this code:
window.addEventListener('focus', function() {
  // omit other code...
  location.reload();
});
Once this code was nested inside a 'load' listener, the issue completely disappeared.
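The nesting described above looks roughly like this (a sketch based on the description, not the colleague's actual code):

window.addEventListener('load', function () {
  window.addEventListener('focus', function () {
    // omit other code...
    location.reload();
  });
});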
In my case, NS_BINDING_ABORTED occurred because of a missing closing tag inside <form>...</form>.
Example:
<form name="myform" action="submit.php" method="post">
<div class="myclassinput">
<input type="text" name="firstname">
<input type="text" name="lastname">
<input type="submit" value="Submit">
</form>
I forgot to write the closing </div> tag before </form>.
I'll note my experience here, just in case... For me it was a website on a local dev server (address 192.... etc.) which was put online at an already-used URL like www.something.com.
The consequence was that an MP4 video (served through the H5P library) didn't play, but could still be scrubbed through via the progress bar. And when I copied/pasted the URL of this video, this NS_BINDING_ABORTED error appeared on my laptop, while my colleague on the same internet connection had no problem viewing it.
I ran ipconfig /release and /renew, then restarted my computer, and it was fixed... Maybe it was some old data conflicting with the previous content on this already-used URL/domain? I don't know.
For me, the reason was that the preventDefault function was not working in Firefox in the form submit event. This answer helped solve it: https://stackoverflow.com/a/56695472/2097494