AJAX HTTP protocol response problem with custom server - ajax

I've started to add HTTP support to a custom C# application (not a web server), and it seems to work fine from Firefox/IE/Chrome when I type the URL directly into the browser: I can see the text string my application returns in the page.
The problem is that when I try to do the same from an XMLHttpRequest in JavaScript on a web page, I don't get a response in Chrome or Firefox (it's fine in IE); instead I get a status of zero from the request object. I can see from my application's debug output that it received the request from the browser and provided the response, so the browsers must not like the response I send in this case, with IE simply being less picky.
I've tried both POST and GET requests to no avail, e.g.:
var request = new XMLHttpRequest();
request.open('GET', url, true);
request.onreadystatechange = mycallback;
//request.setRequestHeader('Content-type', 'application/x-www-form-urlencoded');
request.send(null); // tried '' as well, and other data with a POST
The simplest server reply I have tried is:
HTTP/1.1 200 OK\r\n
Content-Length: 20\r\n
Content-Type: text/plain\r\n
\r\n
...........
I have tried HTTP/1.0 instead of 1.1, and different headers such as Connection: Close and Accept-Ranges, as I tried to mimic other responses I looked at with Wireshark.
Obviously it must be something simple, but the magic combination eludes me!
Many thanks in advance.

And on that note, I have answered my own question: it was the browser's cross-domain security feature.
I have now fixed it by adding the extra response header:
"Access-Control-Allow-Origin: *"
Hopefully that is useful for someone else in the future!
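For anyone who lands here later, here is a minimal sketch of what that looks like in the response-building code of a custom C# socket server. This is illustrative only; the class, method and variable names below are not from my actual application:
using System.Net.Sockets;
using System.Text;

static class HttpResponder
{
    // Sketch: write a minimal HTTP response, including the CORS header,
    // to an accepted TcpClient connection.
    public static void SendResponse(TcpClient client, string body)
    {
        byte[] payload = Encoding.UTF8.GetBytes(body);
        string headers =
            "HTTP/1.1 200 OK\r\n" +
            "Content-Type: text/plain\r\n" +
            "Content-Length: " + payload.Length + "\r\n" +
            "Access-Control-Allow-Origin: *\r\n" +   // the header that unblocks cross-origin XHR
            "Connection: close\r\n" +
            "\r\n";

        NetworkStream stream = client.GetStream();
        byte[] headerBytes = Encoding.ASCII.GetBytes(headers);
        stream.Write(headerBytes, 0, headerBytes.Length);
        stream.Write(payload, 0, payload.Length);
    }
}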

Related

Mandrill API error for send request

I have a problem sending a message via the Jersey client to the Mandrill API. I use the Jersey client as follows:
ClientBuilder.newClient()
.register(JacksonJsonProvider.class)
.target("https://mandrillapp.com/api/1.0/messages/send.json")
.request(MediaType.APPLICATION_JSON_TYPE)
.post(Entity.json(methodEntity));
Below you can see logged headers, method and content of the API request.
POST https://mandrillapp.com/api/1.0/messages/send.json
Accept: application/json
Content-Type: application/json
{"message":{"subject":"Hello World!","text":"Really, Im just saying hi from Mandrill!","to":[{"email":"marcel#xxxx.com","name":"Marcel cccc","type":"to"}],"headers":{},"tags":["test"],"from_email":"info#xxxxx.com","auto_text":true,"preserve_recipients":false},"async":false,"key":"EWIBVEIOVBVOIEBWIOVEB"}
In response to this request I keep receiving the following message:
[{"email":"marcel#XXXX.com","status":"rejected","_id":"0ea5e40fc2f3413ba85b765acdc5f17a","reject_reason":"invalid-sender"}]
I do not know what the issue may be. From some posts I gathered that I must use UTF-8 to encode my message and headers, but setting the encoding to UTF-8 did not do much good. Otherwise the payload seems fine to me, and moreover I found on forums that invalid-sender can mean many other kinds of issues (not just an invalid sender, which is unfortunate).
I had exactly the same problem with
"reject_reason":"invalid-sender"
You have probably already checked the similar question Mandrill "reject_reason": "invalid-sender".
Try that and see if it helps. I also notice that you are missing a header parameter in your request,
e.g. User-Agent: Mandrill-myclient/1.0
Please also try adding this header to your Jersey client setup as follows:
ClientBuilder.newClient()
.register(JacksonJsonProvider.class)
.target("https://mandrillapp.com/api/1.0/messages/send.json")
.request(MediaType.APPLICATION_JSON_TYPE)
.header("User-Agent", "Mandrill-myclient/1.0")
.post(Entity.json(methodEntity));
Does it help?

Different range request response in Firefox and Chrome

I am currently testing some JavaScript that makes a GET request (i.e. XMLHttpRequest with "GET") with a Range header. Because the request is cross-domain, I'm implementing access control headers in the response as described here:
https://developer.mozilla.org/En/HTTP_access_control#Preflighted_requests
What's confusing me, however, is that my current server setup works in Chrome but not in Firefox. Specifically, when I run the JavaScript in Chrome I get back a chunk of the requested data, just like I want. In Firefox, however, I get error code 501 for the OPTIONS request method.
At first that makes it seem like the OPTIONS request method needs to be handled by the server, but since it works in Chrome that looks like a red herring and something else is wrong. Currently the following response headers are implemented; perhaps this is where the problem lies:
Access-Control-Allow-Headers: Range
Access-Control-Allow-Origin: *
Anyone have any insight into what I need to do? Do Chrome and Firefox handle cross-domain restrictions differently?
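For completeness, this is roughly how I would handle the preflight if that does turn out to be the issue. It is only a sketch, assuming an HttpListener-style C# server (my actual setup differs), and the allowed methods/headers listed are illustrative:
using System.Net;

static class PreflightSketch
{
    // Sketch: answer the CORS preflight (OPTIONS) before the real GET with
    // a Range header is attempted.
    public static void Handle(HttpListenerContext context)
    {
        HttpListenerResponse response = context.Response;
        response.AddHeader("Access-Control-Allow-Origin", "*");

        if (context.Request.HttpMethod == "OPTIONS")
        {
            response.AddHeader("Access-Control-Allow-Methods", "GET, OPTIONS");
            response.AddHeader("Access-Control-Allow-Headers", "Range");
            response.StatusCode = 200;
            response.Close();
            return;
        }

        // ...serve the requested byte range here...
        response.Close();
    }
}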

How can I get response header via cross-domain ajax?

I'm trying to read the documentation and I must confess it is not easy reading. I have no problem (after adding the Access-Control-Allow-Origin header) reading responseText, but I fail to get the response headers anywhere except Firefox.
So, my question is: what is the right way to get response headers when using cross-domain AJAX?
I've tried to use Access-Control-Expose-Headers, but, again, failed to read the headers.
So the way it should work is that you specify the headers you want the client to have access to in the Access-Control-Expose-Headers header. For example, if your server sets a Foo response header, and you want the client to be able to read it, your server should also send the following header:
Access-Control-Expose-Headers: Foo
On the client side, you can read all the response headers by calling xhr.getAllResponseHeaders(). This returns the response headers as a string, which you can then parse into an object using the following code: https://gist.github.com/706839
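For concreteness, the server side of that could look like the following sketch. It assumes an HttpListener-based C# server, which is purely my assumption since the question doesn't say what the server is; Foo is the same example header as above:
using System.Net;

static class ExposeHeadersSketch
{
    // Sketch: send a custom header and declare it readable by cross-origin
    // JavaScript via Access-Control-Expose-Headers.
    public static void AddHeaders(HttpListenerResponse response)
    {
        response.AddHeader("Access-Control-Allow-Origin", "*");
        response.AddHeader("Foo", "some value");
        response.AddHeader("Access-Control-Expose-Headers", "Foo");
    }
}
The client should then see Foo in the string returned by xhr.getAllResponseHeaders(), browser bugs aside, as noted below.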
That is an explanation of how things should work. However, note that there is a bug in older browsers where the response headers can't be read on the client. See here for more details: CORS xmlhttprequest HEAD method
I had the same problem, and found an answer on the Chromium mailing list saying that this is fixed in WebKit and will be implemented in Chromium ~19.
I will try to find the topic and update my answer.

Missing POST Parameters with proxy servers

We are encountering some strange behaviour with our web application. Some POST requests do not have any HTTP body when they should; Content-Length is 0 and there are no POST parameters at all. We traced the network traffic at our load balancer and we see that we do not get any request body with some of our POST requests.
All broken POST requests have in common that they arrive via a proxy server.
We already found this question on SO:
Why "Content-Length: 0" in POST requests?
We are now using a frame-escape JavaScript routine and it helps a bit; the error rate seems to drop. But we still have POST requests with no data, which should never happen in our web app. These requests do not come from hackers or the like.
We often saw WebWasher as the proxy, but most of the time we cannot see which proxy is used.
In this PDF we saw a comment about missing POST parameters with WebWasher:
WebWasher - Transparent Authentication Guide
Notes on Some Pitfalls
Note that there are some pitfalls that must be taken into account when setting up transparent authentication:
POST requests will fail if the ICAP server sends an redirect to the authentication server. This affects, however, only the renewal of the mapping since for the browser the request was successful, and the POST body will not be sent again after the final redirect.
We would like to know if there is some workaround other than using only GET instead of POST.
We would also like to hear whether other sites have had problems with missing POST data and what conclusions they drew.
Are there any other reasons why POST data is not sent?
I've had issues with Microsoft's proxy server not playing well with web requests.
I've had to resort to forcing HTTP/1.0 and setting the KeepAlive property to false.
There's something about the way NTLM authentication works that causes the body to be sent sporadically.
I've added this to many of my web requests:
// Inside a WebClient subclass (or a generated web-service proxy class):
protected override WebRequest GetWebRequest(Uri uri)
{
    HttpWebRequest webRequest = (HttpWebRequest) base.GetWebRequest(uri);
    webRequest.KeepAlive = false;                       // drop persistent connections
    webRequest.ProtocolVersion = HttpVersion.Version10; // force HTTP/1.0
    return webRequest;
}
Hope this helps!
Not really an answer, I guess, but I arrived here because we had a similar problem. Initially we thought it was due to the clients being mobile, as this was a common theme, but we have now realised that the common denominator is proxies.
We now return an HTTP 400 when it happens.
Here are a few of the proxies we've had issues with. Posting them to lead the casual googler here:
1.1 ACISA02S, 1.1 abc:3328 (squid/2.6.STABLE21)
1.1 ipcop00.cat.local:8000 (squid/2.6.STABLE21)
1.1 PRXTGLSRV01
1.1 ISA
None of which conform to the spec:
Some HTTP methods MUST cause a cache to invalidate an entity.
...
POST
(the HTTP/1.0 spec states 'Applications must not cache responses to a POST request').
But there is a LOT of badly written code out there.
What headers do you include in replies to POSTs on those URLs?
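For what it's worth, explicitly marking POST responses as uncacheable removes any ambiguity for such proxies. A rough sketch in an ASP.NET handler; the handler itself is just an assumption for illustration, not necessarily how your app is structured:
using System.Web;

// Sketch: a handler that marks its POST responses as uncacheable so that
// intermediate proxies have no reason to store or replay them.
public class NoCachePostHandler : IHttpHandler
{
    public bool IsReusable { get { return true; } }

    public void ProcessRequest(HttpContext context)
    {
        context.Response.Cache.SetCacheability(HttpCacheability.NoCache);
        context.Response.AddHeader("Pragma", "no-cache");
        // ...actual POST handling goes here...
    }
}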

Problem with webclient: Expectation failed?

I have a custom HTTP handler which manipulates HTTP POST and GET. I got the project working on a separate, isolated server; now I need to put it in production...
using (var client = new WebClient())
{
    client.Credentials = CredentialCache.DefaultCredentials;
    client.UploadFile("serverlocation:port", fileToUpload);
}
For some reason, now when using client.UploadFile("", file); (i.e. forcing the HTTP POST) I get:
System.Net.WebException: The remote server returned an error: (417) Expectation failed.
at System.Net.WebClient.UploadFile(Uri address, String method, String fileName)
What could this be? I know the code works, so what else? Maybe the server blocks HTTP POST requests?
I have tried adding:
ServicePointManager.Expect100Continue = false;
But I have had no success, though I'm not 100% sure where this code should go; I assume before I use the WebClient.
Edit 0:
I have just read the following:
Because of the presence of older implementations, the protocol allows ambiguous situations in which a client may send "Expect: 100-continue" without receiving either a 417 (Expectation Failed) status or a 100 (Continue) status. Therefore, when a client sends this header field to an origin server (possibly via a proxy) from which it has never seen a 100 (Continue) status, the client SHOULD NOT wait for an indefinite period before sending the request body.
I believe this request is going through a proxy, which may have something to do with the issue.
Edit 1:
I believe this problem has to do with 100-continue, because using Fiddler to see exactly what my application is sending with WebClient.UploadFile shows this:
POST http://XXX.XXX.XXX.XXX:8091/file.myhandledextension HTTP/1.1
Content-Type: multipart/form-data; boundary=---------------------8ccd1eb03f78bc2
Host: XXX.XXX.XXX.XXX:8091
Content-Length: 4492
Expect: 100-continue
This is despite having put the line ServicePointManager.Expect100Continue = false; before the using statement; I don't think the setting actually takes effect there.
I ended up solving this by putting ServicePointManager.Expect100Continue = false; in the constructor of the calling WebClient class.
Then I used Fiddler to examine the POST request to ensure Expect: 100-continue was not in the request anymore.
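For reference, a minimal sketch of that arrangement, assuming the calling class derives from WebClient (the class name below is illustrative):
using System.Net;

// Sketch: a WebClient subclass that disables the Expect: 100-continue
// handshake before any request is issued.
public class NoExpectWebClient : WebClient
{
    public NoExpectWebClient()
    {
        // Static setting; applies to service points created afterwards.
        ServicePointManager.Expect100Continue = false;
    }
}
Uploads made through such a client should then go out without the Expect: 100-continue header, which can be confirmed in Fiddler as described above.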
