Debug static file requests from IIS6

How can I debug what is being returned by IIS(6) when the response goes through proxies before getting to the browser?
I have requests for static files which are being sent with the 'Accept-Encoding: gzip' header. These are being gzipped correctly. However, if a 'Via:' header (indicating the request has passed through a proxy) is also included, the content is not received gzipped by the browser.
I want to know if the issue is with IIS not applying the compression or related to something the proxy is doing.
How can I investigate this problem?
This is related to IIS6 not doing gzip compression when including Via header in request.

In case anyone else hits this problem, I believe it's due to the "HcNoCompressionForProxies" option, configurable in the metabase. It specifically lets you disable compression for proxied requests.
http://www.microsoft.com/technet/prodtechnol/WindowsServer2003/Library/IIS/05f67ae3-fab6-4822-8465-313d4a3a473e.mspx?mfr=true

If you're still interested, my answer would be to install Fiddler, probably on the client first. For HTTP snooping you can't do much better.
That would be my first port of call.
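If you'd rather script that comparison than click through Fiddler, a rough sketch follows. It requests the same static file twice, once with and once without a Via header, and prints the Content-Encoding of each response. It uses plain Node.js (no automatic decompression), and the host, path and proxy name are placeholders; pointing it first at the IIS box directly and then at the proxy address should tell you whether IIS skipped the compression or the proxy stripped it.

// Compare Content-Encoding with and without a Via header (Node.js sketch).
// Host, path and the proxy name in the Via value are placeholders.
const http = require('http');

function probe(extraHeaders) {
  return new Promise((resolve, reject) => {
    const req = http.request(
      {
        host: 'myserver.example.com',
        path: '/static/app.js',
        headers: { 'Accept-Encoding': 'gzip', ...extraHeaders },
      },
      (res) => {
        res.resume(); // drain the body; only the headers matter here
        resolve(res.headers['content-encoding'] || '(none)');
      }
    );
    req.on('error', reject);
    req.end();
  });
}

(async () => {
  console.log('without Via:', await probe({}));
  console.log('with Via:   ', await probe({ Via: '1.1 some-proxy' }));
})();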

Related

Prevent truncated HTTP response from being cached

We saw this issue on one of our dev machines - the vendor.js bundle in our Angular project had somehow gotten cached, while truncated, which breaks the web app until you clear the cache.
We do use browser caching (together with URL-hashing so caching doesn't prevent app updates).
Is there any way to prevent the browser from caching a truncated response? Actually, I would have thought that the browser has this built in (i.e. it won't cache a response where the Content-Length header does not match the number of bytes downloaded).
The browser where we reproduced the problem was Chrome.
I think I found the issue - for whatever reason, our HTTP Response was missing the "Content-Length" header in the Response Headers.
The response passes through 2 proxies so one of them might remove the "Content-Length" header.
What we did in such a case was to add a cache-busting parameter to the request URL of the library.
You just need to raise the number, and next time the browser and the caches in between will fetch a fresh copy from the server:
e.g. www.myserver.com/libs/vendor.js?t=12254565
www.myserver.com/libs/vendor.js?t=12254566
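For what it's worth, a minimal sketch of that idea on the loading side, assuming a hand-written script tag rather than Angular's built-in hashing; BUILD_VERSION is a placeholder you bump whenever you want browsers and intermediate caches to fetch a fresh copy:

// Append a cache-busting parameter when injecting the script tag.
// BUILD_VERSION is a placeholder; raising it forces a fresh download.
const BUILD_VERSION = '12254566';

const script = document.createElement('script');
script.src = `https://www.myserver.com/libs/vendor.js?t=${BUILD_VERSION}`;
document.head.appendChild(script);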

413 request entity too large jetty server

I am trying to make a POST request to an endpoint served using a Jetty server. The request errors out with "413 Request entity too large", but the Content-Length is only 70 KB, which is well below the default limit of 200 KB.
I have tried serving via an nginx server and setting client_max_body_size to the desired level, but that didn't work. I have set setMaxFormContentSize on the WebContext and that didn't help either. I have followed https://wiki.eclipse.org/Jetty/Howto/Configure_Form_Size and that didn't help me either.
Does anyone have any solution to offer?
wiki.eclipse.org is OLD and is only for Jetty 7 and Jetty 8 (long ago EOL/End of Life). The giant red box at the top of the page you linked even tells you this, and gives you a link to the up-to-date documentation.
If you see a "413 Request entity too large" from Jetty, then it refers to the Request URI and Request Headers.
Note: some 3rd party libraries outside of Jetty's control can also use HttpServletResponse.sendError(413) which would result in the same response status message as you reported.
Judging by your screenshot, which does not include all of the details (it's really better to copy/paste the text when asking questions on Stack Overflow; screenshots often hide details that are critical to getting a direct answer), your Cookie header is massive and is causing the 413 error by pushing the Request Headers over 8 KB in size.
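If you want to sanity-check that theory from the affected browser, a rough console sketch is below; it only sees cookies that aren't HttpOnly, so the real Cookie header can be even larger, but anything approaching Jetty's default 8 KB request-header budget is a strong hint:

// Rough lower bound on the Cookie request header size for this origin.
// HttpOnly cookies are invisible to script, so the real header may be bigger.
const cookieBytes = new Blob([document.cookie]).size;
console.log(cookieBytes + ' bytes of cookies (Jetty default request header limit is 8192)');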

F5 BIG IP - ajax POST with HTTP response truncated

JMeter 2.12.
I have a scenario that was fully functional in front of an Apache reverse proxy. Recently we've replaced the reverse proxy with F5 BIG-IP and now my scenario hangs.
The problem is that, for a particular AJAX POST request, the HTTP response is truncated: I receive a 200 OK but the HTML content is not complete (no html tags, for example). When I post the same request with Firefox the full content comes back fine.
Note that I don't receive the HTTP header Transfer-Encoding: chunked.
In this case, what could the difference be between Firefox and JMeter?
Does anyone have an idea how I could get the full HTML response?
Thanks for any reply.
That completely depends on the settings on your F5 and on what exactly you mean by "response is truncated" and "no html tags". Do you get the correct response but with the html tags stripped out? Or is the response simply cut short, so that you only get, say, the first n bytes?
The best way to find out what is actually going wrong is to put something like Fiddler in between and look for the real difference between the responses, especially in the response headers (Content-Length, Transfer-Encoding, etc.).
When you have found the actual difference, please post it here so we can help you further.
On a side note: by any chance do you have custom coding on the F5 (iRules) which reacts to different User-Agent settings?
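If Fiddler isn't convenient, the same comparison can be scripted. Below is a minimal Node.js sketch (host, path and payload are placeholders) that replays the POST and prints the status, the headers relevant to truncation, and the number of body bytes actually received, which makes "truncated" measurable:

// Replay the POST and report status, Content-Length, Transfer-Encoding and
// how many body bytes actually arrived. Host, path and payload are placeholders.
const http = require('http');

const payload = 'field1=value1&field2=value2';

const req = http.request(
  {
    host: 'my-f5-frontend.example.com',
    path: '/the/ajax/endpoint',
    method: 'POST',
    headers: {
      'Content-Type': 'application/x-www-form-urlencoded',
      'Content-Length': Buffer.byteLength(payload),
    },
  },
  (res) => {
    let received = 0;
    res.on('data', (chunk) => (received += chunk.length));
    res.on('end', () => {
      console.log('status:           ', res.statusCode);
      console.log('content-length:   ', res.headers['content-length']);
      console.log('transfer-encoding:', res.headers['transfer-encoding']);
      console.log('body bytes read:  ', received);
    });
  }
);
req.on('error', console.error);
req.end(payload);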
Given that you send identical requests, you should be receiving identical responses.
Use JMeter's View Results Tree listener to inspect request details, or, even better, compare the requests being sent by Firefox and JMeter using a lower-level network sniffer such as Wireshark, detect the differences, and configure JMeter accordingly to send the same request(s) as Firefox does.
The other reason might be JMeter truncating a large response: by default JMeter displays "only" the first 10 megabytes in the View Results Tree listener. If this is the case, you can add the following line to the user.properties file:
view.results.tree.max_size=0
and restart JMeter to pick the property up - it will suppress response truncation and you will be able to view the full response data.
An alternative way of setting the property is to pass it via the -J command-line argument, like:
jmeter -Jview.results.tree.max_size=0 ....
References:
Full list of command-line options
Apache JMeter Properties Customization Guide

How can I validate http response headers?

This is the first time I am doing something with headers. I am mainly concerned with Cache-Control, but there may be others I will need to check as well. For example, I am trying to send the following header to the browser (based on tutorials I just read):
Cache-Control:private, max-age=2011-12-30 11:40:56
Google Chrome displays it this way in Network -> Headers -> Response headers, but how do I know if it's correct, that there aren't any typos, syntax errors and such? Will it really work? Will the browser behave the way I want it to, or will it treat it as gibberish (something like "unknown header/value")? I've tried sending nonsensical headers on purpose, but they got displayed with the rest. Is there any Chrome tool / addon for that, or any other way? Thank you in advance!
I'm afraid you won't be able to check if the resource has been cached by proxies en route, but you can check if your browser has cached it.
While in the Network panel of Chrome DevTools, hit F5 to reload your page. You should see something like "304 Not Modified" in the status field for the resource you are treating (which means the resource has not been modified and its contents were not received from the server but rather loaded from the browser's cache.)
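If you also want to read the header back programmatically rather than eyeballing DevTools, here is a small sketch (the URL is a placeholder, and it should be run from a page on the same origin so the header is visible to script):

// Fetch a resource and print the Cache-Control header the server actually sent.
// Note: the standard max-age directive takes a number of seconds (e.g. max-age=3600),
// not a date, so compare what comes back against the syntax you intended.
fetch('https://example.com/some/resource')
  .then((res) => console.log('Cache-Control:', res.headers.get('cache-control')))
  .catch(console.error);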

Cross domain ajax POST in chrome

There are several topics about the problem with cross-domain AJAX. I've been looking at these and the conclusion seems to be this:
Apart from using something like JSONP, or a proxy solution, you should not be able to do a basic jQuery $.post() to another domain.
My test code looks something like this (running on "http://myTestdomain.tld/path/file.html")
var myData = {datum1: "datum", datum2: "datum"};
$.post("http://External-Ip:port", myData, function (response) { alert(response); });
When I tried this (the reason I started looking), the Chrome console told me:
XMLHttpRequest cannot load http://External-IP:port/page.php. Origin http://myTestdomain.tld is not allowed by Access-Control-Allow-Origin.
Now this is, as far as I can tell, expected. I should not be able to do this. The problem is that the POST actually DOES come through. I've got a simple script running that saves the $_POST to a file, and it is clear the post gets through. Any real data I return is not delivered to my calling script, which again seems expected because of the access-control issue. But the fact that the post actually arrived at the server got me confused.
Am I correct in assuming that the above code running on "myTestdomain" should not be able to do a simple $.post() to the other domain (External-IP)?
Is it expected that the request would actually arrive at the external IP's script, even though the output is not received? Or is this a bug? (I'm using Chrome 11.0.696.60.)
I posted a ticket about this on the WebKit bugtracker earlier, since I thought it was weird behaviour and possibly a security risk.
Since security-related tickets aren't publicly viewable, I'll quote the reply from Justin Schuh here:
This is implemented exactly as required by the spec. For simple cross-origin requests (http://www.w3.org/TR/cors/#simple-method) there is no pre-flight check; the request is made and the response cannot be read if the appropriate headers do not authorize the requesting origin. Functionally, this is no different than creating a form and using script to make an off-origin POST (which has always been possible).
So: you're allowed to do the POST since you could have done that anyway by embedding a form and triggering the submit button with JavaScript, but you can't see the result, because you wouldn't be able to in the form scenario either.
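The form equivalent mentioned above looks roughly like this (the target URL and field names are placeholders); the POST reaches the other origin, but the page navigates away and the submitting script never gets to read the response:

// Cross-origin POST via a dynamically created form - this has always been
// possible, but the originating script never sees the response.
const form = document.createElement('form');
form.method = 'POST';
form.action = 'http://External-IP:port/page.php'; // placeholder target

const field = document.createElement('input');
field.type = 'hidden';
field.name = 'datum1';
field.value = 'datum';
form.appendChild(field);

document.body.appendChild(form);
form.submit();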
A solution would be to add a header to the script running on the target server, e.g.
<?php
header("Access-Control-Allow-Origin: http://your_source_domain");
....
?>
Haven't tested that, but according to the spec, that should work.
Firefox 3.6 seems to handle it differently, by first doing an OPTIONS to see whether or not it can do the actual POST. Firefox 4 does the same thing Chrome does, or at least it did in my quick experiment. More about that is on https://developer.mozilla.org/en/http_access_control
The important thing to note about the JavaScript same-origin policy restriction is that it is something built into modern browsers for security - it is not a limitation of the technology or something enforced by servers.
To answer your question, neither of these are bugs.
Requests are not stopped from reaching the server - this gives the server the option to allow these cross-domain requests by setting the appropriate headers.
The response is also received back by the browser. Before the use of the access control headers, responses to cross-domain requests would be stopped dead in their tracks by a security conscious browser - the browser would receive the response but it would not hand it off to the script. With the access control headers, the server has the option of setting the appropriate headers indicating to a compliant browser that it would like to allow certain origin URLs to make cross domain requests.
The exact behaviour on response might differ between browsers - I can't recall for sure now but I think Chrome calls the success callback function when using jQuery's ajax() but the response is empty. IIRC, Firefox will not invoke the success function.
I get the same thing happening for me. You are able to post across domains but are not able to receive a response. This is what I expected to be able to do and happens for me in Firefox, Chrome, and IE.
One way to kind of get around this caveat is to have a local PHP file which will fetch the data via curl and relay the response to your JavaScript. (Kind of restating what you said you knew already.)
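The answer above describes that relay with a PHP file and curl; the same same-origin proxy pattern, sketched here in Node.js instead (Node 18+ for the built-in fetch, with paths and the upstream URL as placeholders and no error handling), looks like this:

// Minimal same-origin relay sketch: the browser POSTs to /proxy on its own
// origin and the server forwards the body to the external host.
// Paths and the upstream URL are placeholders; Node.js 18+ assumed.
const http = require('http');

http
  .createServer((req, res) => {
    if (req.method === 'POST' && req.url === '/proxy') {
      let body = '';
      req.on('data', (chunk) => (body += chunk));
      req.on('end', async () => {
        const upstream = await fetch('http://external-ip.example.com:8081/page.php', {
          method: 'POST',
          headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
          body,
        });
        res.writeHead(upstream.status, { 'Content-Type': 'text/plain' });
        res.end(await upstream.text());
      });
    } else {
      res.writeHead(404);
      res.end();
    }
  })
  .listen(8080);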
Yes, it's correct, and you won't be able to do that unless you use a proxy.
No, the request won't go to the external IP as long as there is such a limitation.
