Why "Content-Length: 0" in POST requests? - windows

A customer sometimes sends POST requests with Content-Length: 0 when submitting a form (10 to over 40 fields).
We tested it with different browsers and from different locations but couldn't reproduce the error. The customer is using Internet Explorer 7 and a proxy.
We asked them to have their system administrator look into the problem from their side, running some tests without the proxy, etc.
In the meantime (half a year later and still no answer) I'm curious if somebody else knows of similar problems with a Content-Length: 0 request. Maybe from inside some Windows network with a special proxy for big companies.
Is there a known problem with Internet Explorer 7? With a proxy system? The Windows network itself?
Google only showed something in the context of NTLM (and such) authentication, but we aren't using this in the web application. Maybe it's in the way the proxy operates in the customer's network with Windows logins? (I'm no Windows expert. Just guessing.)
I have no further information about the infrastructure.
UPDATE: In December 2010 it was possible to inform one administrator about this, including links from the answers here. We got in contact because of another problem that was also caused by the proxy. No feedback since then, and the error messages are still there. I'm laughing to keep myself from crying.
UPDATE 2: This problem has existed since mid-2008. Every few months the customer gets annoyed and wants it fixed ASAP. We send them all the old e-mails again and ask them to contact their administrators, either to fix it or to run some further tests. In December 2010 we were able to send some information to one administrator. No feedback. The problem isn't fixed and we don't know if they even tried. And in May 2011 the customer wrote again and wanted this fixed, the same person who has had all the information since 2008.
Thanks for all the answers. You helped a lot of people, as I can see from some comments here. Too bad the real world is this grotesque for me.
UPDATE 3: May 2012, and I was wondering why we hadn't received another demand to fix this (see UPDATE 2). I looked into the error log, which reports only this single error every time it happens (about 15 a day). It stopped at the end of January 2012. Nobody said anything. They must have done something with their network. Everything is OK now. From summer 2008 to January 2012. Too bad I can't tell you what they did.
UPDATE 4: September 2015. The website had to collect some data and deliver it to the customer's main website. There was an API with an account. Whenever there was a problem they contacted us, even if the problem was clearly on the other side. For a few weeks now we haven't been able to send them the data; the account isn't available anymore. They had a relaunch and I can't find the pages anymore that used the data from our site. The bug report hasn't been answered and nobody complained. I guess they just ended this project.
UPDATE 5: March 2017. The API stopped working in the summer of 2015. The customer seems to continue paying for the site and is still accessing it in February 2017. I'm guessing they use it as an archive. They don't create or update any data anymore so this bug probably won't reemerge after the mysterious fix of January 2012. But this would be someone else's problem. I'm leaving.

Internet Explorer does not send form fields if they are posted from an authenticated site (NTLM) to a non-authenticated site (anonymous).
This is a feature for challenge-response situations (NTLM- or Kerberos-secured web sites) where IE can expect that the first POST request immediately leads to an HTTP 401 Authentication Required response (which includes a challenge), and only the second POST request (which includes the response to the challenge) will actually be accepted. In these situations IE does not upload the possibly large request body with the first request, for performance reasons. Thanks to EricLaw for posting that bit of information in the comments.
This behavior occurs every time an HTTP POST is made from an NTLM-authenticated (i.e. intranet) page to a non-authenticated (i.e. internet) page, or if the non-authenticated page is part of a frameset where the frameset page is authenticated.
The work-around is either to use GET as the form method, or to make sure the non-authenticated page is opened in a fresh tab/window (favorite/link target) without a partly authenticated frameset. As soon as the authentication model for the whole window is consistent, IE will start to send form contents again.
Definitely related: http://www.websina.com/bugzero/kb/browser-ie.html
Possibly related: KB923155
Full Explanation: IEInternals Blog – Challenge-Response Authentication and Zero-Length Posts

This is easy to reproduce with MS-IE and an NTLM authentication filter on the server side. I had the same issue with JCIFS (1.2.), Struts 1. and MS-IE 6/7 on XP SP2. It was finally fixed. There are several workarounds.
Change the form method from POST (the Struts default) to GET.
For most pages with small forms this works well. Unfortunately I may have more than 50 records to send back to the server in the HTTP stream, and IE limits GET URLs to about 2,083 characters (the whole URL, not just the query string). So this is a quick workaround, but not applicable for me.
Send a GET before executing the POST action.
This was recommended in the MS KB article. My project has many legacy procedures and I did not want to take the risk at the time. I never tried it because, as I understand the KB article, it still needs some extra authentication processing when the GET is received by the filter layer, and I did not want to change the behaviour for other browsers, e.g. Firefox and Opera.
Detect whether the POST arrived with a zero Content-Length (you can get it from the header hash structure of your framework).
If so, trigger an NTLM authentication cycle by getting a challenge code from the domain controller (or cache) and expecting an NTLM response.
When the NTLM type 2 message is received and the session is still valid, you don't really need to authenticate the user; just forward the request to the expected action if the POST Content-Length is not zero. By the way, this increases network traffic, so please check your cache lifetime setting and the SMB session soTimeOut configuration before applying the change.
Or, more simply, you may just send a 401 Unauthorized status to MS-IE and the browser will send the POST request again with its data (see the sketch below).
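For illustration only, here is a minimal sketch of that last workaround as Node/Express middleware (framework, route setup and port are placeholders; the same check can live in a servlet filter or any other server stack):

import express from "express";

const app = express();

// If IE sent the "blind" zero-length POST, answer 401 so it repeats the
// request with the real form data; otherwise let the request pass through.
app.use((req, res, next) => {
  if (req.method === "POST" && req.headers["content-length"] === "0") {
    res.status(401).end();
    return;
  }
  next();
});

app.listen(8080);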
Microsoft provides a hotfix via KB923155, but it does not seem to work. Has anyone found a workable hotfix? Here is a link for reference: http://www.websina.com/bugzero/kb/browser-ie.html

We have a customer on our system with exactly the same problem. We've pinpointed it to the proxy/firewall, Microsoft's IAS. It's stripping the POST body and sending Content-Length: 0. There's not a lot we can do to work around it, however, and we don't want to use GET requests as that exposes usernames/passwords etc. in the URL string. There are nearly 7,000 users on our system and only one with the problem... and only one using Microsoft IAS, so it has to be that.

There's a good chance the problem is that the proxy server in between implements HTTP 1.0.
In HTTP 1.0 you must use the Content-Length header field (see section 10.4 here):
A valid Content-Length is required on all HTTP/1.0 POST requests. An HTTP/1.0 server should respond with a 400 (bad request) message if it cannot determine the length of the request message's content.
The request going into the proxy is HTTP 1.1 and therefore does not need to use the Content-Length header field. The Content-Length header is usually used, but not always. See the following excerpt from the HTTP 1.1 RFC, section 14.13:
Applications SHOULD use this field to indicate the transfer-length of the message-body, unless this is prohibited by the rules in section 4.4.
Any Content-Length greater than or equal to zero is a valid value. Section 4.4 describes how to determine the length of a message-body if a Content-Length is not given.
So the proxy server may not see a Content-Length header, which it considers mandatory in HTTP 1.0 whenever there is a body. It falls back to 0 so that the request will at least reach the server. Remember, the proxy doesn't know the rules of the HTTP 1.1 spec, so it doesn't know how to handle the situation when there is no Content-Length header.
Are you 100% sure your request is specifying the Content-Length header? If it is using another means, as defined in section 4.4, because it thinks the server is 1.1 (not knowing about the 1.0 proxy in between), then you will have the problem you describe.
Perhaps you can use HTTP GET instead to bypass the problem.
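If you control the client, you can also make sure a Content-Length is always present by buffering the body before sending it. A minimal sketch in Node (host, path and body are placeholders, not taken from the question):

import * as http from "http";

// Buffer the body and set Content-Length explicitly, so the request is never
// sent with chunked Transfer-Encoding, which an HTTP/1.0 proxy cannot handle.
const body = Buffer.from("field1=value1&field2=value2");

const req = http.request({
  host: "example.com",
  path: "/form-handler",
  method: "POST",
  headers: {
    "Content-Type": "application/x-www-form-urlencoded",
    "Content-Length": body.length,
  },
}, (res) => res.resume());

req.end(body);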

This is a known problem for Internet Explorer 6, but not for 7 as far as I know. You can install the KB831167 fix for IE6.
You can read more about it here.
Some questions for you:
Do you know which type of proxy?
Do you know if there is an actual body sent in the request?
Does it happen consistently every time? Or only sometimes?
Is there any binary data sent in the request? Maybe the data starts with a \0 and the proxy has a bug with binary data.

If the user is going through an ISA proxy that uses NTLM authentication, then it sounds like this issue, which has a solution provided (a patch to the ISA proxy)
http://support.microsoft.com/kb/942638
POST requests that do not have a POST body may be sent to a Web server that is published in ISA Server 2006

I also had a problem where requests from a customer's IE 11 browser had Content-Length: 0 and did not include the expected POST content. When the customer used Firefox or Chrome, the expected content was included in the request.
I worked out that the cause was the customer using an HTTP URL instead of an HTTPS URL (e.g. http://..., not https://...) while our application uses HSTS. It seems there might be a bug in IE 11 where, when a request gets upgraded to HTTPS due to HSTS, the request content gets lost.
Getting the customer to correct the URL to https://... resulted in the content being included in the POST request and resolved the problem.
I haven't investigated whether it is actually a bug in IE 11 any further at this stage.

Are you sure these requests are coming from a "customer"?
I've had this issue with bots before; they sometimes probe sites for "contact us" forms by sending blank POST requests based on the action URI in FORM tags they discover during crawling.

The presence and possible values of the Content-Length header in HTTP are described in the HTTP (I assume 1.1) RFC:
http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.13
In HTTP, it SHOULD be sent whenever the message's length can be determined prior to being transferred
See also:
If a message is received with both a Transfer-Encoding header field and a Content-Length header field, the latter MUST be ignored.
http://www.w3.org/Protocols/rfc2616/rfc2616-sec4.html#sec4.4
Maybe your message is carrying a Transfer-Encoding header?
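If you can run something in front of the application, a quick way to check is to log both headers for incoming POSTs. A minimal diagnostic sketch in Node (port and response body are arbitrary):

import * as http from "http";

http.createServer((req, res) => {
  if (req.method === "POST") {
    // One of these should normally be present; if Transfer-Encoding shows up,
    // an HTTP/1.0 intermediary may be the one dropping the body.
    console.log("Content-Length:   ", req.headers["content-length"]);
    console.log("Transfer-Encoding:", req.headers["transfer-encoding"]);
  }
  req.resume(); // drain the body so the connection is not left hanging
  res.end("ok");
}).listen(8080);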
Later edit: also please note "SHOULD" as used in the RFC is very important and not equivalent to "MUST":
3. SHOULD This word, or the adjective "RECOMMENDED", mean that there may exist valid reasons in particular circumstances to ignore a particular item, but the full implications must be understood and carefully weighed before choosing a different course.
Ref: http://www.ietf.org/rfc/rfc2119.txt

We had a customer using the same website in anonymous and NTLM mode (on different ports). We found out that in our case the 401 was related to the Riverbed Steelhead appliance used for HTTP optimization. The first signal pointing us in that direction was an X-RBT-Optimized-By header. The issue was the Gratuitous 401 feature:
This feature can be used with both per-request and per-connection authentication but it's most effective when used with per-request authentication. With per-request authentication, every request must be authenticated against the server before the server would serve the object to the client. However, most browsers do not cache the server's response requiring authentication and hence it will waste one round-trip for every GET request. With Gratuitous 401, the client-side Steelhead appliance will cache the server response and when the client sends the GET request without any authentication headers, it will locally respond with a "401 Unauthorized" message and therefore saving a round trip. Note that the HTTP module does not participate in the actual authentication itself. What the HTTP module does is to inform the client that the server requires authentication without requiring it to waste one round trip.

Google also shows this as a bug in IE (some versions, anyway) after an HTTPS connection hits the keep-alive timeout and reconnects to the server. The solution seems to be configuring the server not to use keep-alive for IE under HTTPS.
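As a sketch of that idea only (shown on plain HTTP to keep it self-contained; in practice it belongs on the HTTPS endpoint or the TLS-terminating frontend), the server can simply refuse keep-alive for IE user agents:

import * as http from "http";

http.createServer((req, res) => {
  // Close the connection after each response for IE, so no keep-alive
  // connection is reused after the timeout.
  if (/MSIE/.test(req.headers["user-agent"] || "")) {
    res.setHeader("Connection", "close");
  }
  res.end("ok");
}).listen(8080);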

Microsoft's hotfix for KB821814 can set Content-Length to 0:
The hotfix that this article describes implements a code change in Wininet.dll to:
Detect the RESET condition on a POST request.
Save the data that is to be posted.
Retry the POST request with the content length set to 0. This prevents the reset from occurring and permits the authentication process to complete.
Retry the original POST request.

curl sends PUT/POST requests with Content-Length: 0 when configured to use an HTTP proxy. It's a trick to avoid the buffering that would otherwise be required for a first, unauthorized PUT/POST request to the proxy. For GET/HEAD requests curl simply repeats the query. The scheme for PUT/POST looks like this:
Send the first PUT/POST request with Content-Length set to 0.
Get the answer. An HTTP status code of 407 means we have to use proxy authorization; prepare the proxy authentication headers for the next request.
Send the request again, with the proxy authentication headers filled in and the real data to POST/PUT.

Related

Firefox 68.0 update causes API calls to return 403 after CSP report-uri POST requests

After the most recent Firefox update (68.0), I am having problems with persistent session data.
When a user logs in, as the page loads, there are various expected CSP violations that send a POST request containing the violation report to the path given by the report-uri directive.
Subsequent API GET requests to retrieve user data return a 403 Forbidden, which (by design) redirects the user back to the login page. Since the user is already logged in, the same API requests are sent again, resulting in another 403, which leads to an infinite loop until, after an arbitrary number of iterations, the API requests return 200 OK.
All requests (both POST and GET) before and after the update are the same.
It seems to me that the fact that there are CSP report POST requests before the API requests changes something related to the session, which is used by the back-end to determine if the user has the correct privileges.
Could Firefox have changed something about the way it handles CSP report-uri requests, or could their responses have changed with the update?
What would be a good way to approach this problem?
Firefox has since been updated to version 68.0.1, and the update seems to have fixed this problem. The release notes don't mention anything I can relate to it, but regardless, the problem is solved.

Do browsers allow cross-domain requests to be "sent"?

I am a newbie to website security and am currently trying to understand the Same-Origin Policy (SOP) in some depth.
While there are very good posts on Stack Overflow and elsewhere about the concept of SOP, I could not find up-to-date information on whether Chrome and other browsers allow cross-domain XHR POST requests to be 'sent' in the first place.
From this 5-year-old post, it appears that Chrome allows the request to reach the requested server but does not allow the requester to read the response.
I tested that on my website trying to change user info on my server from a different domain. Details below:
My domain: "www.mysite.com"
Attacker domain: "www.attacker.mysite.com"
According to Same-Origin-Policy those two are considered different Origins.
User (while logged in to www.mysite.com) opens www.attacker.mysite.com and presses a button that fires a POST request to the 'www.mysite.com' server... The submitted hidden form (without tokens in this case) has all the required information to change the user's info on the 'www.mysite.com' server --> Result: successful CSRF attack; the user info does indeed change.
Now do the same, but with JavaScript submitting the form through jQuery's .post instead of submitting the form directly --> Result: besides Chrome giving the usual message:
No 'Access-Control-Allow-Origin' header is present on the requested resource
I found that no change is made on the server side... It seems that the request does not even leave the browser. The user info does not change at all! While that sounds good, I was expecting the opposite.
According to my understanding and the post linked above, for cross-domain requests only the server response should be blocked by the browser, not the sending of the POST request to the server in the first place.
Also, I do not have any CORS configuration set; no Access-Control-Allow-Origin headers are sent. But even if I had that set, it should only apply to 'reading' the server response, not to actually sending the request... right?
I thought of preflights, where a request is sent to check whether it's allowed on the server or not, thus blocking the request before its actual data is sent to change the user info. However, according to Access_Control_CORS, those preflights are only sent in specific situations which do not apply to my simple AJAX POST request (a simple form with the default enctype application/x-www-form-urlencoded and no custom headers).
So has Chrome changed its security policy to prevent the POST request to a cross domain from being sent in the first place?
Or am I missing something here in my understanding of the same-origin policy?
Either way, it would be helpful to know if there is a source for updated security measures implemented in different web browsers.
The XMLHttpRequest object's behavior has been revisited over time.
The first AJAX requests were unconstrained.
When SOP was introduced, XMLHttpRequest was updated to restrict every cross-origin request:
If the origin of url is not same origin with the XMLHttpRequest origin the user agent should raise a SECURITY_ERR exception and terminate these steps.
From XMLHttpRequest Level 1, open method
The idea was that an AJAX request that couldn't read the response was useless and probably malicious, so they were forbidden.
So in general a cross-origin AJAX call would never make it to the server.
This API is now called XMLHttpRequest Level 1.
It turned out that SOP was in general too strict. Before CORS was developed, Microsoft started to supply (and tried to standardize) a new XMLHttpRequest2 API that would allow only some specific requests, stripped of any cookie and most headers.
The standardization failed and it was merged back into the XMLHttpRequest API after the advent of CORS. The behavior of Microsoft's API was mostly retained, but more complex (read: potentially dangerous) requests were allowed upon specific permission from the server (through the use of pre-flights).
A POST request with non-simple headers or Content-Type is considered complex, so it requires a pre-flight.
Pre-flights are done with the OPTIONS method and don't contain any form information; as such, no updates happen on the server.
When the pre-flight fails, the user agent (the browser) terminates the AJAX request, preserving the XMLHttpRequest Level 1 behavior.
So in short: for XMLHttpRequest the SOP was stronger, denying any cross-origin operation despite the goals stated by the SOP principles. This was possible because at the time it didn't break anything.
CORS loosened the policy allowing "non harmful" requests by default and allowing the negotiation of the others.
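As a concrete illustration of the simple vs. pre-flighted distinction described above (URLs and field names are placeholders):

// Simple request: form-encoded body, no custom headers, sent without a pre-flight.
fetch("https://www.mysite.com/api/user", {
  method: "POST",
  body: new URLSearchParams({ name: "Alice" }), // application/x-www-form-urlencoded
});

// Complex request: application/json is not a simple Content-Type, so the
// browser sends an OPTIONS pre-flight first and aborts if it is not allowed.
fetch("https://www.mysite.com/api/user", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ name: "Alice" }),
});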
OK... I got it... It's neither a new policy in Chrome nor something missing in my understanding of SOP...
The session cookies of "www.mysite.com" were set to "HttpOnly" which means, as mentioned here, that they won't be sent along with AJAX requests, and thus the server won't change the user's details in point (4).
Once I added xhrFields: { withCredentials: true } to my POST request, I was able to change the user's information in a cross-domain XHR POST call, as expected.
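For readers using fetch rather than jQuery, the equivalent looks roughly like this (URL and field are placeholders):

// Without credentials: "include", a cross-origin request is sent without the
// session cookie, so the server sees an unauthenticated request.
fetch("https://www.mysite.com/api/change-user-info", {
  method: "POST",
  body: new URLSearchParams({ email: "new@example.com" }),
  credentials: "include",
});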
Although this proves the already known fact that the browser actually sends cross-domain POST requests to the server and only blocks reading of the server response, it might still be helpful to those trying to deepen their understanding of SOP and/or playing with CORS.

firefox repeating text/html GET http requests to the web app

Via Firefox, if I do a GET text/html request to my web app, I get a 200 response back, and then Firefox sends 3 more of the same request right afterward. All return 200s. Does anyone know what would cause this?
Some other observations about the issue:
In Firebug's network tab, only one request shows up. I can only see the extra requests using Tamper Data or another tool that sees the Http requests sent from my browser.
This issue does not happen in prior versions of my web app. When I compare the responses that get returned by the two different versions of the web app, I can't see anything that would cause this issue (but then, I really don't know what to look for). The responses are identical except for the web app's cookies, which are different.
This issue happens with JavaScript enabled or disabled.
Something similar is happening with Chrome, though it seems to be sending only 2 extra requests.
I don't see any browser redirects in the Html header.
This is only happening with text/html requests, not css requests, for example.
All 4 responses returned seem to have the complete Html page in the body, and they also have the cookie that the web app uses.
In Tamper Data, the 'Load Flags' column (whatever that is) says the following: First request is VALIDATE_ALWAYS_LOAD_DOCUMENT_URI LOAD_INITIAL_DOCUMENT_URI; second and third requests are LOAD_NORMAL; fourth request is LOAD_FROM_CACHE VALIDATE_NEVER
I don't see it happening with POSTs
It does not happen when the response is a 302.
If I go into the firefox config and set network.http.max-connections-per-server to 1, then Firefox only sends one request (the issue does not occur). (I don't think I can ask all our users to do that. :-))
Why this issue is a problem:
This site has been around a long time and wasn't designed for this behavior. It's probably not going to go well.
(edited to add new findings)

Missing POST Parameters with proxy servers

We encounter some strange behaviour with our web application. Some POST requests do not have any HTTP body when they should; the Content-Length is 0 and there are no POST parameters at all. We traced the network traffic at our load balancer and can see that some of our POST requests arrive without any request body.
All broken POST requests have in common that they arrive via a proxy server.
We already found this question on SO:
Why "Content-Length: 0" in POST requests?
We are now using a frame-escape JavaScript routine (along the lines of the sketch below) and it helps a bit: the error rate seems to drop. But we still get POST requests with no data, which should never happen in our web app. These requests do not come from hackers or the like.
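For reference, a minimal frame-escape routine looks roughly like this (a sketch only, not our exact code):

// If the page is loaded inside a (possibly differently authenticated) frameset,
// break out of it so the whole window has a consistent authentication model.
if (window.top && window.top !== window.self) {
  window.top.location.href = window.self.location.href;
}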
Often we saw WebWasher as the proxy, but most of the time we cannot see which proxy is used.
In this PDF we saw a comment about missing POST parameters with webwasher
WebWasher - Transparent Authentication Guide
Notes on Some Pitfalls
Note that there are some pitfalls that must be taken into account when setting up transparent authentication:
POST requests will fail if the ICAP server sends a redirect to the authentication server. This affects, however, only the renewal of the mapping, since for the browser the request was successful, and the POST body will not be sent again after the final redirect.
We would like to know if there is some workaround other than using only GET instead of POST.
We would also like to hear whether other sites have had problems with missing POST data and what conclusions they reached.
Are there any other reasons why POST data is not sent?
I've had issues with Microsoft's proxy server not playing well with web requests.
I've had to resort to forcing HTTP/1.0 and setting the KeepAlive property to false.
There's something about the way NTLM authentication works that causes the body to be sent sporadically.
I've added this to many of my web requests
protected override WebRequest GetWebRequest(Uri uri)
{
    HttpWebRequest webRequest = (HttpWebRequest) base.GetWebRequest(uri);
    // Disable keep-alive and fall back to HTTP/1.0 so the proxy's NTLM
    // handshake does not end up swallowing the request body.
    webRequest.KeepAlive = false;
    webRequest.ProtocolVersion = HttpVersion.Version10;
    return webRequest;
}
Hope this helps!
Not really an answer, I guess, but I arrived here because we had a similar problem. Initially, we thought it was due to the clients being mobile, as this was a common theme, but we have now realised that the common denominator is proxies.
We now return an HTTP 400 when it happens.
Here are a few of the proxies, we've had issues with. Posting them to lead the casual googler here:
1.1 ACISA02S, 1.1 abc:3328 (squid/2.6.STABLE21)
1.1 ipcop00.cat.local:8000 (squid/2.6.STABLE21)
1.1 PRXTGLSRV01
1.1 ISA
None of them conform to the spec:
Some HTTP methods MUST cause a cache to invalidate an entity.
...
POST
(the HTTP/1.0 spec states 'Applications must not cache responses to a POST request').
But there is a LOT of badly written code out there.
What headers do you include in replies to POSTs on the URLs?
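If the answer is "none", one defensive option worth trying is to make replies to POSTs explicitly uncacheable, so even a badly behaved proxy has no excuse to cache or mangle them. A sketch in Express (route and port are placeholders):

import express from "express";

const app = express();

app.post("/form-handler", (req, res) => {
  // Mark the POST response as uncacheable for both HTTP/1.1 and 1.0 intermediaries.
  res.set("Cache-Control", "no-store, no-cache, must-revalidate");
  res.set("Pragma", "no-cache");
  res.send("ok");
});

app.listen(8080);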

Eradicating 401 "Unauthorised" responses followed by 200 "Ok" responses

I've got a situation with a large internal corporate web-based application running ASP.NET 3.5 on IIS6 generating 401 "Unauthorised" responses followed by 200 "OK" responses (as profiled by Fiddler). I'm aware of why this is happening (integrated auth forcing the browser to resend credentials), but I'm looking for some thoughts on how to minimise or eradicate the situation. The application in question runs over the WAN, with some users experiencing latency of up to 250 ms, so forcing a subsequent request can have a noticeable impact on page load time, particularly when there are a number of cascading drop-down lists on the pages.
The users of the application are internal, within a managed desktop environment, so mechanisms to force the browser to send credentials on the first request (is this even possible?) could be workable from a deployment perspective. This would work for pages requiring the user's identity, but for resources not requiring authentication (WebResource.axd, ScriptResource.axd and some custom web services), allowing anonymous auth would be possible. I've looked at defining this on a per-location basis in the web.config, but the results were mixed (still a number of 401 responses).
I’d appreciate any guidance on a “best practice” for dealing with this situation. There are a lot of resources out there identifying the problem but none that I’ve found providing a feasible solution.
Thanks!
Edit: Resources not requiring authentication (i.e. web services used for cascading drop-down lists) can be requested anonymously by adding a location entry to the web.config, but I've yet to find an answer for authenticated resources.
Unfortunately this is an artifact of the HTTP NTLM authentication scheme.
In short, the browser (Internet Explorer or otherwise) doesn't know that it needs to authenticate at all until it gets bounced with a 401 response containing a WWW-Authenticate response header.
In the case of WWW-Authenticate: NTLM -- annoyingly enough -- it requires two 401 responses on a single persistent connection to complete, and this process must be repeated once the HTTP persistent connection is closed. So even if you were able to get the browser to initiate a request blindly attempting NTLM, at least one 401 response cannot be removed from the transaction.
I think your best bet would be to maximize the amount of time that persistent connections are left open when idle.
CSCRIPT.EXE c:\inetpub\adminscripts\ADSUTIL.VBS SET W3SVC/AuthPersistSingleRequest FALSE
will reduce the number of 401s significantly.
I believe you can convince Firefox to automatically send NTLM credentials to a whitelisted set of domains via "about:config" settings - use the "network.automatic-ntlm-auth.trusted-uris" setting. I haven't tried this myself though. I'm not sure there's any equivalent for Internet Explorer.
Unfortunately if you're using something else like Kerberos there does not seem to be a way to avoid the 401.
You may need to consider Forms Authentication if the 401-induced latency is too long. The users would have to explicitly log in, but just once. Then you could use a cookie or cookieless scheme and get a response on the first try.
I imagine that page load would be slow if you have cascading drop-downs and your initial page load populates one value that causes a POST to get the next list, sets that value, causes another POST to get the next list, and so on. If this is the situation, perhaps you need to populate all those drop-downs on the first round trip rather than waiting for POST responses.
TL;DR: I put HTTP header information in the HTTP body.
My example is in Angular, but any TypeScript/JavaScript (framework) might have the same issue.
When doing an HTTP POST call to my backend API, which requires headers with the logged-in user information, I added my HTTP headers where my HTTP body should be, and the headers arrived empty.
PROBLEM
markInstructionAsCompleted(visitScheduleId: string, instructionId: number) {
  // The headers object is passed as the second argument, which HttpClient.post
  // treats as the request body, so no authorization headers are actually sent.
  return this.http.post(`${environment.apiUrl}/VisitInstructions/schedule/${visitScheduleId}/done/${instructionId}`, this.getHeaderWithAuthorization());
}
SOLUTION: notice that a second argument, null (the request body), has been added to the HTTP post call.
markInstructionAsCompleted(visitScheduleId: string, instructionId: number) {
  // null is the request body; the headers now go in the third (options) argument.
  return this.http.post(`${environment.apiUrl}/VisitInstructions/schedule/${visitScheduleId}/done/${instructionId}`, null, this.getHeaderWithAuthorization());
}
