I have the following Mongoose server (the embedded web server, not the JavaScript library):
// Build the JSON body once so the length and the bytes written always match.
std::ostringstream oss;
oss << "{ \"key\" : \"value\"}";
const std::string body = oss.str();
mg_printf(conn,
          "HTTP/1.1 200 OK\r\n"
          "Cache-Control: no-cache\r\n"
          "Content-Type: text/plain\r\n"
          "Content-Length: %d\r\n"
          "\r\n",
          (int) body.length());
mg_write(conn, body.c_str(), body.length());
When I open the page in Firefox, it works well: I can see the JSON message { "key" : "value"}. Firebug is happy with it and shows me the interpreted JSON object.
When I access the same URL with $.getJSON("http://127.0.0.1:8080/AtoB", [...] ), Firebug shows me the correct header, but an empty body.
What should I do?
Thanks
Additional info:
It doesn't work with application/json either; I left text/plain for ease of debugging.
It doesn't work with $.get() or other methods either. The problem occurs earlier than that.
It doesn't work with a raw XMLHttpRequest either.
I tried with a final \0 and a final \n, with no luck.
The original Mongoose server (mongoose.exe) produces the same behaviour when accessed from jQuery.
So XMLHttpRequest only accepts connections to the same host... I knew that, but completely forgot.
The .html file must be accessed through Mongoose too (same host, same port) instead of using file://
This question really was a duplicate of AJAX response not valid in C++ but Apache
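For completeness, a minimal sketch of the working setup (assuming the page is now served by Mongoose itself, e.g. as http://127.0.0.1:8080/index.html, instead of being opened via file://), using a relative URL so the request stays same-origin:
// page served by Mongoose on the same host and port as the /AtoB handler
$.getJSON("/AtoB", function (data) {
    console.log(data.key); // "value"
});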
I am writing a Lua script to send some data from my ESP8266 WeMOS LoLin board to a webapp I developed using spring-boot. To do so, the script has to authenticate on the webapp first. The problem is that when I POST the authentication data, I find in my server logs that even though the authentication succeeds, the session is closed.
Here is part of my Lua code:
print("Authenticating .........")
local url = getBaseUrl() .. '/login'
local body = 'username=' .. config.server.usr .. '&'..
'password=' .. config.server.pwd .. '&' ..
'X-CSRF-TOKEN=a65sd464-6666-4bb4-4543-23k234tl234'
local headers =
'Content-Type: application/x-www-form-urlencoded\r\n'..
'Connection: keep-alive\r\n'..
'Accept: */*\r\n' ..
'Cookie: JSESSIONID=F7A9D7FA7D9AF79D7F9ASD7FA97A979F7D7A'
print(url, "\n", headers, "\n", body)
http.post(url, headers, body, loginPostCallback)
*X-CSRF-TOKEN and JSESSIONID in this example are dummy values. In the full script they are obtained from the response of a previous GET request.
So I tried doing the same operation using curl from the command line, and I had no problem at all.
curl -v -H "Content-Type: application/x-www-form-urlencoded\r\nConnection: keep-alive\r\nAccept: */*\r\nJSESSIONID=F7A9D7FA7D9AF79D7F9ASD7FA97A979F7D7A" -b "JSESSIONID=F7A9D7FA7D9AF79D7F9ASD7FA97A979F7D7A" -d "X-CSRF-TOKEN=a65sd464-6666-4bb4-4543-23k234tl234&username=admin&password=admin" http://192.168.1.4:8080/login
Then I traced the requests on the server and compared what my Lua script sends with what curl sends, and I saw that Lua's http.post() always sends a "Connection: close" header, even when I explicitly set the "Connection: keep-alive" header (which is also included in the request).
Looking at the NodeMCU http library code, I saw that http.c (line 224) always includes the "Connection: close" header.
Does anyone know why they are doing this? Is there any way to make "Connection: keep-alive" requests?
Thanks in advance
UPDATE
I've been able to authenticate on the server with a workaround, using the net library instead of http and sending the Connection: keep-alive header myself. Anyway, my questions still remain unanswered, so unless the admins tell me to publish my workaround as a solution and mark the question as resolved, I will leave it open waiting for someone to answer.
At least it's documented, but the behavior has been like that from day 1. As I didn't write that code, I can only speculate as to the "why"; hence, the question doesn't fit well with the Stack Overflow Q&A style.
The ESP8266 is a very constrained device, memory-wise and otherwise. Not keeping HTTP connections alive until they time out or are closed by the server therefore does make sense.
I'm testing on Windows, trying to simulate POST requests (with different form variables) for load testing. I have tried all kinds of load testing software but failed to get it working.
For GET requests, I know I can just put the parameters at the end of the URL:
http://www.example.com?id=yyy&t=zzz
But how do I simulate a POST request?
I have a Chrome REST client, but I do not know what to put in the headers and data fields.
Here's what I've tried so far:
using System;
using System.IO;
using System.Net;
using System.Text;

class Program
{
    static void Main(string[] args)
    {
        string viewstateid = "/wEPDwUKLTY3NjEyMzE4NWRkK4DxZpjTmZg/RGCS2s13vkEWmwWiEE6v+XrYoWVuxeg=";
        string eventid = "/wEdAAoSjOGPZYAAeKGjkZOhQ+aKHfOfr91+YI2XVhP1c/pGR96FYSfo5JULYVvfQ61/Uw4pNGL67qcLo0vAZTfi8zd7jfuWZzOhk6V/gFA/hhJU2fx7PQKw+iST15SoB1LqJ4UpaL7786dp6laCBt9ubQNrfzeO+rrTK8MaO2KNxeFaDhrQ0hxxv9lBZnM1SHtoODXsNUYlOeO/kawcn9fX0BpWN7Brh7U3BIQTZwMNkOzIy+rv+Sj8XkEEA9HaBwlaEjg=";
        string username = "user1";
        string password = "ttee";
        string loginbutton = "Log In";
        string URLAuth = "http://localhost/login.aspx";

        // Form values must be URL-encoded; the viewstate contains '+', '/' and '=' characters.
        string postString = string.Format("VIEWSTATE={0}&EVENTVALIDATION={1}&LoginUser_UserName={2}&LoginUser_Password={3}&LoginUser_LoginButton={4}",
            Uri.EscapeDataString(viewstateid), Uri.EscapeDataString(eventid),
            Uri.EscapeDataString(username), Uri.EscapeDataString(password),
            Uri.EscapeDataString(loginbutton));

        const string contentType = "application/x-www-form-urlencoded";
        System.Net.ServicePointManager.Expect100Continue = false;

        CookieContainer cookies = new CookieContainer();
        HttpWebRequest webRequest = WebRequest.Create(URLAuth) as HttpWebRequest;
        webRequest.Method = "POST";
        webRequest.ContentType = contentType;
        webRequest.CookieContainer = cookies;
        // Content-Length is a byte count, not a character count.
        webRequest.ContentLength = Encoding.UTF8.GetByteCount(postString);
        webRequest.UserAgent = "Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.0.1) Gecko/2008070208 Firefox/3.0.1";
        webRequest.Accept = "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8";
        webRequest.Referer = "http://localhost/login.aspx";

        // Write the form body.
        using (StreamWriter requestWriter = new StreamWriter(webRequest.GetRequestStream()))
        {
            requestWriter.Write(postString);
        }

        // Read the response.
        using (WebResponse response = webRequest.GetResponse())
        using (StreamReader responseReader = new StreamReader(response.GetResponseStream()))
        {
            string responseData = responseReader.ReadToEnd();
            Console.WriteLine(responseData);
        }
    }
}
It would be helpful if you provided more information - e.g. what OS you're using, what you want to accomplish, etc. But generally speaking, cURL is a very powerful command-line tool that I frequently use (in Linux) for imitating HTTP requests:
For example:
curl --data "post1=value1&post2=value2&etc=valetc" http://host/resource
OR, for a RESTful API:
curl -X POST -d @file http://host/resource
You can check out more information here-> http://curl.haxx.se/
EDITs:
OK. So basically you're looking to stress-test your REST server? Then cURL really isn't helpful unless you want to write your own load-testing program, and even then raw sockets would be the way to go. I would suggest you check out Gatling. The Gatling documentation explains how to set up the tool, and from there you can run all kinds of GET, POST, PUT and DELETE requests.
Unfortunately, short of writing your own program - i.e. spawning a whole bunch of threads and inundating your REST server with different types of requests - you really have to rely on a stress/load-testing toolkit. Just using a REST client to send requests isn't going to put much stress on your server.
More EDITs
So in order to simulate a POST request over a socket, you basically have to establish the initial socket connection with the server. I am not a C# guy, so I can't tell you exactly how to do that; I'm sure there are 1001 C# socket tutorials on the web. With most RESTful APIs, you usually need to provide a URI to tell the server what to do. For example, let's say your API manages a library, and you are using a POST request to tell the server to update information about a book with an id of '34'. Your URI might be
http://localhost/library/book/34
Therefore, you should open a connection to localhost on port 80 (or 8080, or whatever port your server is on), and pass along an HTTP request header. Going with the library example above, your request header might look as follows:
POST /library/book/34 HTTP/1.0\r\n
X-Requested-With: XMLHttpRequest\r\n
Content-Type: application/x-www-form-urlencoded\r\n
Referer: localhost\r\n
Content-Length: 36\r\n\r\n
title=Learning+REST&author=Some+Name
From here, the server should shoot back a response header, followed by whatever the API is programmed to tell the client - usually something to say the POST succeeded or failed. To stress-test your API, you should essentially do this over and over again by creating a threaded process.
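To make that concrete, here is a minimal sketch of that exchange, written in JavaScript (Node.js) only to keep it short; the same sequence applies in C#. The host, port and /library/book/34 endpoint are the hypothetical ones from the example above:
// Send the raw POST request from the example above over a plain TCP socket.
const net = require('net');

const body = 'title=Learning+REST&author=Some+Name';
const request =
    'POST /library/book/34 HTTP/1.0\r\n' +
    'X-Requested-With: XMLHttpRequest\r\n' +
    'Content-Type: application/x-www-form-urlencoded\r\n' +
    'Content-Length: ' + Buffer.byteLength(body) + '\r\n' +
    '\r\n' +
    body;

const socket = net.connect(80, 'localhost', function () {
    socket.write(request);
});
socket.on('data', function (chunk) {
    process.stdout.write(chunk); // response header + body from the server
});
socket.on('end', function () {
    console.log('connection closed');
});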
Also, if you are posting JSON data, you will have to alter your header and content accordingly. Frankly, if you are looking to do this quick and clean, I would suggest using Python (or Perl), which have several libraries for creating POST, PUT, GET and DELETE requests, as well as for POSTing and PUTting JSON data. Otherwise, you might end up doing more programming than stress testing. Hope this helps!
Postman is the best application to test your APIs!
You can import or export your routes and let it remember all your request bodies! :)
EDIT: This answer is 5 years old and outdated :D
Here's the new Postman app:
https://www.postman.com/
A simple way is to use curl from the command line, for example:
DATA="foo=bar&baz=qux"
curl --data "$DATA" --request POST --header "Content-Type:application/x-www-form-urlencoded" http://example.com/api/callback | python -m json.tool
Or here is an example of how to send a raw POST request using the Bash shell (JSON request):
exec 3<> /dev/tcp/example.com/80
DATA='{"email": "foo#example.com"}'
LEN=$(printf "$DATA" | wc -c)
cat >&3 << EOF
POST /api/retrieveInfo HTTP/1.1
Host: example.com
User-Agent: Bash
Accept: */*
Content-Type:application/json
Content-Length: $LEN
Connection: close

$DATA
EOF
# Read response.
while read line <&3; do
echo $line
done
This should help if you need a publicly exposed website but you're on a dev PC. Also, to answer (I can't comment yet): "How do I post to an internal-only running development server with this? – stryba"
NGROK creates a secure public URL to a local webserver on your development machine (Permanent URLs available for a fee, temporary for free).
1) Run ngrok.exe to open the command line (on your desktop).
2) Type ngrok.exe http 80 to start a tunnel.
3) Test by browsing to the displayed web address, which will forward to and display the default page on port 80 of your dev PC.
Then use some of the tools recommended above to POST to your ngrok site ('https://xxxxxx.ngrok.io') to test your local code.
ngrok: https://ngrok.com/
Don't forget to add a user agent, since some servers will block the request if there's no user agent (you would get a "Forbidden resource" response). Example:
curl -X POST -A 'Mozilla/5.0 (X11; Linux x86_64; rv:30.0) Gecko/20100101 Firefox/30.0' -d "field=acaca&name=afadxx" https://example.com
I'm developing an application which is supposed to serve different content for "normal" browser requests and for AJAX requests to the same URL
(in fact, encapsulating the response HTML in a JSON object if the request is AJAX).
For this purpose, I'm detecting an AJAX request on the server side, and processing the response appropriately, see the pseudocode below:
function process_response(request, response)
{
if request.is_ajax
{
response.headers['Content-Type'] = 'application/json';
response.headers['Cache-Control'] = 'no-cache';
response.content = JSON( some_data... )
}
}
The problem is that when the first AJAX request to the currently viewed URL is made, strange things happen in Google Chrome. If, right after the response arrives and is processed via JavaScript, the user clicks some link (a static one that leads to another page) and then clicks the back button in the browser, they see the returned JSON code instead of the rendered website (from the server logs I can tell that no new request is made). It seems to me that Chrome stores the latest response for the specific URL and doesn't take into account that it has a different content type, etc.
Is that a bug in Chrome, or am I misusing the HTTP protocol?
--- update 12 11 2012, 12:38 UTC
Following PatrikAkerstrand's answer, I've found the following Chrome bug: http://code.google.com/p/chromium/issues/detail?id=94369
Any ideas how to avoid this behaviour?
You should also include a Vary header:
response.headers['Vary'] = 'Content-Type'
Vary is the standard way to control caching context in content negotiation. Unfortunately, it also has buggy implementations in some browsers; see Browser cache vary broken.
I would suggest using unique URLs.
Depending on your framework's capabilities, you can redirect (302) the browser to URL + .html to force the response format and make the cache key unique within the browser session. Then for AJAX requests you can still keep the suffix-less URL. Alternatively, you may suffix the AJAX URL with .json instead.
Other options are: prefixing AJAX requests with /api, or adding a cache-busting query parameter such as ?rand=1234, as in the sketch below.
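For example, with jQuery (a minimal sketch; cache: false makes jQuery append a unique _=<timestamp> parameter to the request URL, so the JSON response never shares a cache entry with the page itself):
$.ajax({
    url: window.location.pathname, // same URL the user is currently viewing
    dataType: 'json',
    cache: false,                  // jQuery appends ?_=<timestamp> to the request URL
    success: function (data) {
        console.log(data);
    }
});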
Setting Cache-Control to no-store did the trick in my case, while no-cache didn't. This may have unwanted side effects, though.
no-store: The response may not be stored in any cache. Although other directives may be set, this alone is the only directive you need in preventing cached responses on modern browsers.
Source: Mozilla Developer Network - HTTP Cache-Control
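Applied to the pseudocode in the question, that is a one-line change (sketch):
response.headers['Cache-Control'] = 'no-store';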
I'm trying to use XMLHttpRequest over SSL for a login system. Currently, I'm just testing the capabilities of XMLHttpRequest over SSL to make sure it indeed works. So here's what I'm testing:
Relevant Javascript:
xml_request.open("POST", "https://......", true);
xml_request.setRequestHeader("Content-type", "application/x-www-form-urlencoded");
xml_request.setRequestHeader("Content-length", 0);
xml_request.setRequestHeader("Connection", "close");
xml_request.send();
alert(xml_request.reponseText); //displayed using the appropriate onreadystatechange handler
PHP Script:
print json_encode(array(
"text" => "this is text"
));
Now, using http the request works fine; xml_request.responseText holds the JSON encoded string. When I use https, xml_request.responseText is defined, but it's an empty string.
Does anyone know why this is and/or how to fix it?
Thanks much,
Dale
Usually, any certificate mismatch will prevent you from connecting. Can you open the site URL with a browser and check for the certificate settings on the server to see if anything is out of the ordinary or giving you warning messages?
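One way to see whether that is what's happening (a minimal sketch, reusing the elided URL from the question): a rejected certificate or a blocked mixed-content/cross-origin request finishes with status 0 and an empty responseText, while a normal response comes back with status 200.
var xhr = new XMLHttpRequest();
xhr.open("POST", "https://......", true); // same elided URL as in the question
xhr.setRequestHeader("Content-type", "application/x-www-form-urlencoded");
xhr.onreadystatechange = function () {
    if (xhr.readyState === 4) {            // request finished
        if (xhr.status === 200) {
            alert(xhr.responseText);       // the JSON string from the PHP script
        } else {
            // status 0: the request never completed (certificate, mixed-content or cross-origin block)
            alert("Request failed, status: " + xhr.status);
        }
    }
};
xhr.send();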
One of the request parameters in an http request made by the client contains Japanese characters. If I make this request in Firefox and look at the parameter as soon as it reaches the server by debugging in Eclipse, the characters look fine. If I do the same request using IE 8, the characters get garbled when I look at them at the same point in the server code (they are fine in both browsers, though). I have examined the POST requests made by both browsers, and they both pass the same sequence of characters, which is:
%2C%E3%81%9D%E3%81%AE%E4%BB%96
I am therefore thinking that this has to do with the encoding. If I look at the HTTP headers of the request, I notice the following differences. In IE:
Content-Type: application/x-www-form-urlencoded
Accept: */*
In Firefox:
Content-Type: application/x-www-form-urlencoded; charset=UTF-8
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
I'm thinking that the IE 8 header doesn't state the UTF-8 encoding explicitly, even though it's specified in the meta tag of the HTML document. I am not sure if this is the problem. I would appreciate any help, and please do let me know if you need more information.
Make sure the page that contains the form has UTF-8 as its charset. In IE's case, the best way to ensure this is by sending an HTTP header ('Content-Type: text/html; charset=utf-8') and adding a meta http-equiv tag with the content type/charset to your HTML (I've seen this actually matter, even when the appropriate header was sent).
Second, your form can also specify the content type:
<form enctype="application/x-www-form-urlencoded; charset=utf-8">