Size limits on GET/POST parameters? - G-WAN

I noticed that once the length of a GET parameter exceeds 400 bytes, or 2000 bytes for a POST parameter, G-WAN returns a 400 error. Is there any way to increase these limits?
Also, when I try csp_entity.html, I keep getting a 400 error for files smaller than the entity size limit (and a 413 error for files that exceed it, which is expected).
Note: this is the latest G-WAN on Ubuntu 12.10.

Two examples are on their way, one with streaming and one without. They will both serve to show how to proceed and to test the implementation.
With everyone on the same codebase, reporting issues (if any) will be easier.
Note that v4.25 shipped this morning. It does the same things as v4.24, but corrects some compiler optimizations that generated bugs for a few users on some platforms (empty files, empty dates).

Related

How to work around 100K character limit for the StanfordNLP server?

I am trying to parse book-length blocks of text with StanfordNLP. The HTTP requests work great, but there is a non-configurable 100KB limit on the text length, MAX_CHAR_LENGTH in StanfordCoreNLPServer.java.
For now, I am chopping up the text before I send it to the server, but even if I try to split between sentences and paragraphs, there is some useful coreference information that gets lost between these chunks. Presumably, I could parse chunks with large overlap and link them together, but that seems (1) inelegant and (2) like quite a bit of maintenance.
Is there a better way to configure the server or the requests to either remove the manual chunking or preserve the information across chunks?
BTW, I am POSTing using the python requests module, but I doubt that makes a difference unless a corenlp python wrapper deals with this problem somehow.
You should be able to start the server with the flag -maxCharLength -1 and that'll get rid of the sentence length limit. Note that this is inadvisable in production: arbitrarily large documents can consume arbitrarily large amounts of memory (and time), especially with things like coref.
The list of options for the server should be accessible by calling the server with -help; they are also documented in the code here.
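For reference, a minimal sketch of how that might look with the python requests module you mention; the server command line, port, annotator list and input file name are just assumptions for illustration, not taken from your setup:

# Shell command to start the server without the character cap (memory/classpath are examples):
#   java -mx8g -cp "*" edu.stanford.nlp.pipeline.StanfordCoreNLPServer -port 9000 -maxCharLength -1

import requests

def annotate(text, url="http://localhost:9000"):
    # Ask for coref explicitly so cross-chunk links are no longer needed,
    # and request JSON output to keep parsing simple.
    props = '{"annotators": "tokenize,ssplit,pos,lemma,ner,parse,coref", "outputFormat": "json"}'
    resp = requests.post(url, params={"properties": props},
                         data=text.encode("utf-8"), timeout=3600)
    resp.raise_for_status()
    return resp.json()

with open("book.txt", encoding="utf-8") as f:   # hypothetical input file
    result = annotate(f.read())
print(len(result["sentences"]), "sentences annotated")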

RabbitMQ message size limitation?

I am trying to gauge the performance of RabbitMQ when my message size increases to a few MB. However, even when I send a 32KB message, I get a "Resource temporarily unavailable" message from the server. There is no error in the log files and no memory-limit errors. How do I go about debugging this issue?
If it's of any help, I'm running this on an EC2 t1.micro instance, so 592MB of RAM.
According to the bug you linked, someone recently (it looks like after you left the link to the bug) commented that they can reliably reproduce the bug when the message size is >= 15821 bytes.
I would recommend that you check whether that also holds true for you -- i.e., whether you can also reproduce the problem at that threshold -- and then evaluate whether staying under that amount (thus avoiding the bug documented in the issue above) is a sufficient size for your needs. If not, you may want to try pika (https://github.com/pika/pika) and see if it works better with larger messages (one of the other comments on that bug suggests that pika did work for them with larger message sizes).
Another option that may work, depending on your exact use case, would be to include in the RabbitMQ message payload a key of sorts that allows you to fetch the large blob of data from wherever it is stored (Postgres, MongoDB, etc.) when you consume the message, and thereby avoid the bug. Perhaps not ideal if you really want to encapsulate everything inside the payload, but it may be a feasible workaround.
In terms of debugging, since it appears that this is a bug with rabbitpy itself, I think you would need to debug the actual rabbitpy library if you wanted to proceed on that front. Doable, but perhaps not feasible due to time, etc.
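For what it's worth, here is a rough sketch of that claim-check workaround using pika; the store_blob()/fetch_blob() helpers and the in-memory dict are hypothetical stand-ins for whatever datastore you pick:

import json
import uuid
import pika

blob_store = {}                                   # stand-in for Postgres/MongoDB/S3/...

def store_blob(data):
    key = str(uuid.uuid4())
    blob_store[key] = data
    return key

def fetch_blob(key):
    return blob_store[key]

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()
channel.queue_declare(queue="work")

# Producer: park the big payload elsewhere and publish only a small reference.
large_payload = b"x" * (32 * 1024)                # stand-in for the real multi-MB message
channel.basic_publish(exchange="", routing_key="work",
                      body=json.dumps({"blob_id": store_blob(large_payload)}))

# Consumer: resolve the reference back into the real data.
def on_message(ch, method, properties, body):
    data = fetch_blob(json.loads(body)["blob_id"])
    print("got", len(data), "bytes")
    ch.basic_ack(delivery_tag=method.delivery_tag)

channel.basic_consume(queue="work", on_message_callback=on_message)
channel.start_consuming()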

Should the length of a URL string be limited to increase security?

I am using ColdFusion 8 and jQuery 1.7.2.
I am using CFAJAXPROXY to pass data to a CFC. Doing so creates a JSON array (argument collection) and passes it through the URL. The string can be very long, since quite a bit of data is being passed.
The site that I am working on has existing code that limits the length of any URL query string to 250 characters. This is done in the application.cfm file by testing the length of the query string; if any query string is greater than 250 characters, the request is aborted. The purpose of this was to ensure that hackers or other malicious code couldn't be passed through the URL string.
Now that we are using the query string to pass JSON arrays in the URL, we discovered that the Ajax request was being aborted quite frequently.
We have many other security practices in place, such as stripping any "<>" tags from code and using CFQUERYPARAM.
My question is whether limiting the length of a URL string for the sake of security is a good idea or simply ineffective.
There is absolutely no correlation between URI length and security; it is rather a question of:
Limiting the information that you provide to a user agent to a 'need to know' basis. This covers things such as the type of application server you run and its associated conventions, the web server you run and its associated conventions, and the operating system on the host machine. These are essentially things that can be considered vulnerabilities.
Reducing the impact of exploiting those vulnerabilities, i.e. introducing patches, ensuring correct configuration, etc.
As alluded to above, at the web tier this doesn't only cover GETs (your concern), but also POSTs, PUTs, DELETEs and just about any other operation on an HTTP resource.
Moved this into an answer for Evik -
That seems (at best) completely unnecessary if the inputs are being properly sanitized. I'm sure someone clever can quickly defeat a "security by small doorway" defense, assuming that's the only defense.
OWASP has some good, sane guidelines for web security. As far as I've read, limiting the size of the url is not on the list. For more information, see: https://www.owasp.org/index.php/Top_10_2010-Main
I would also like to echo Hereblur's comment that this makes internationalization tricky, or maybe impossible.
I'm not a ColdFusion developer, but I think it's the same as with other languages.
I think it helps only a little bit. The problems of malicious code or SQL injection should be handled by your application.
I agree that limiting the length of query string values is safer and makes things more difficult for attackers, but you can't do this with POST data, and it limits some functionality. For example:
a single UTF-8 character may take 9 characters after URL encoding, which means you can fit only about 27 non-English characters within the 250-character limit.
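To make that concrete, a quick Python sketch (the Thai character is just an arbitrary non-ASCII example):

from urllib.parse import quote

ch = "\u0e01"                              # one character, 3 bytes in UTF-8
encoded = quote(ch)                        # '%E0%B8%81' -- 9 characters once percent-encoded
print(len(ch.encode("utf-8")), len(encoded), 250 // len(encoded))   # 3 9 27
# A 250-character query-string cap therefore leaves room for only ~27 such characters.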
The only reason to limit it has to do with performance and DoS attacks -- not security per se (though DoS is a security threat, since it can bring down your server). Web servers and app servers (including CF) allow you to limit the size of POST data so that your server won't be degraded by very large file uploads. URL data, if substantial, can result in long-running requests as the server struggles to parse, handle, or write it.
So there is some modest risk here related to such things. Back in the NT days, IIS 3 had a number of flaws that were "locked down" by limiting the length of the URL -- but those days are long gone. There are far more exploits representing low-hanging fruit that I would look at first before examining this issue too closely -- unless of course you feel like you are having a specific problem with folks probing you (with long URLs, I mean :).
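For comparison only (this is not the ColdFusion/IIS setting being discussed), that kind of body-size cap looks roughly like the following in a Python/Flask app; the endpoint and limit are just illustrative:

from flask import Flask, request

app = Flask(__name__)
# Reject request bodies larger than 1 MB; oversized POSTs get "413 Request Entity Too Large".
app.config["MAX_CONTENT_LENGTH"] = 1 * 1024 * 1024

@app.route("/upload", methods=["POST"])
def upload():
    # request.get_data() will never hand the application more than the configured limit.
    return "received %d bytes" % len(request.get_data())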

When sending a file via AJAX, does it get read into memory first?

I'm writing an uploader that has to be able to transmit files of any size (up to 30gigs) to the server.
My original intention was to write a java applet that would break the file up into pieces, send those to the server, and then reassemble them there.
However, someone has suggested that AJAX's XMLHttpRequest can do the job in conjunction with nsIFileInputStream
(example here: https://developer.mozilla.org/en/using_xmlhttprequest#Sending_files_using_a_FormData_object )
and by using PUT instead of POST.
I'm worried about 2 things and can't seem to find the answer.
1) Will AJAX attempt to read the file into memory before sending it? (That would obviously break the whole thing.)
[EDIT]
This http://www.codeproject.com/KB/ajax/AJAXFileUpload.aspx?msg=2329446 example explicitly states that they're using ActiveXObject because that DOESN'T load the file into memory... which suggests to me that XMLHttpRequest would load it into memory. I'm surprised I'm having such a hard time finding this info, to be honest.
2) How reliable is this approach? I realize that if the connection dies, the upload would have to resume from scratch, but realistically, how likely is it that, over a standard cable connection with an upload throttle of about 0.5 MB/s, a 30 gig file would arrive at the server?
I'm trying something similar using the File API and blob.slice, but it turned out to use up memory on large files. However, you could use Google Gears, which handles large sliced files much better. It also doesn't cause errors with the slice order, which FileReader combined with XHR does frequently and randomly.
I do, however, find that uploading files via JavaScript is generally very unstable.
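Outside the browser, the chunking idea from the question looks roughly like the sketch below (Python requests); the /upload endpoint, header names and chunk size are made up for illustration, and the server side would reassemble the pieces by index:

import os
import requests

CHUNK_SIZE = 8 * 1024 * 1024                 # 8 MB per piece
path = "huge_file.bin"                        # hypothetical 30 GB file
total = os.path.getsize(path)

with open(path, "rb") as f:
    index = 0
    while True:
        chunk = f.read(CHUNK_SIZE)            # only one chunk is ever held in memory
        if not chunk:
            break
        resp = requests.put("http://example.com/upload",
                            data=chunk,
                            headers={"X-Chunk-Index": str(index),
                                     "X-Total-Size": str(total)})
        resp.raise_for_status()               # a failed chunk can be retried without restarting
        index += 1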

Can HTTP headers be too big for browsers?

I am building an AJAX application that uses both the HTTP content and HTTP headers to send and receive data. Is there a point where data received in an HTTP header won't be read by the browser because it is too big? If yes, what is the limit, and is the behaviour the same in all browsers?
I know that theoretically there is no limit to the size of HTTP headers, but in practice, past what point could I run into problems on certain platforms or browsers, or with certain software installed on the client machine? I am looking more for a guideline for the safe use of HTTP headers. In other words, to what extent can HTTP headers be used to transmit additional data without potential problems coming up?
Thanks for all the input on this question; it was very much appreciated and interesting. Thomas' answer got the bounty, but Jon Hanna's answer brought up a very good point about proxies.
Short answers:
Same behaviour: No
Lowest limit found in popular browsers:
10KB per header
256 KB for all headers in one response.
Test results from a MacBook running Mac OS X 10.6.4:
Biggest response successfully loaded, all data in one header:
Opera 10: 150MB
Safari 5: 20MB
IE 6 via Wine: 10MB
Chrome 5: 250KB
Firefox 3.6: 10KB
Note
Those outrageously big headers in Opera, Safari and IE took minutes to load.
Note to Chrome:
Actual limit seems to be 256KB for the whole HTTP header.
Error message appears: "Error 325 (net::ERR_RESPONSE_HEADERS_TOO_BIG): Unknown error."
Note to Firefox:
When sending data through multiple headers, 100MB worked fine, just split up over 10,000 headers.
My Conclusion:
If you want to support all popular browsers, 10KB per header seems to be the limit, and 256KB for all headers together.
My PHP code used to generate those responses:
<?php
// Build one very large header value to probe the browser's per-header limit.
ini_set('memory_limit', '1024M');
set_time_limit(90);
$header = "";
$bytes = 256000;
for ($i = 0; $i < $bytes; $i++) {
    $header .= "1";
}
header("MyData: ".$header);
/* Firefox: send the same data split across multiple headers instead
for ($i = 1; $i < 1000; $i++) {
    header("MyData".$i.": ".$header);
}*/
echo "Length of header: ".($bytes / 1024).' kilobytes';
?>
In practice, while there are rules prohibiting proxies from dropping certain headers (indeed, quite clear rules on which can be modified, and even on how to inform a proxy whether it may modify a new header added by a later standard), this only applies to "transparent" proxies, and not all proxies are transparent. In particular, some wipe headers they don't understand as a deliberate security practice.
Also, in practice some proxies do misbehave (though things are much better than they were).
So, beyond the obvious core headers, the amount of header information you can depend on being passed from server to client is zero.
This is just one of the reasons why you should never depend on headers being used well (e.g., be prepared for the client to repeat a request for something it should have cached, or for the server to send the whole entity when you request a range), barring the obvious case of authentication headers (under the fail-to-secure principle).
Two things.
First of all, why not just run a test that gives the browser progressively larger and larger headers and wait till it hits a number that doesn't work? Just run it once in each browser. That's the most surefire way to figure this out. Even if it's not entirely comprehensive, you at least have some practical numbers to go off of, and those numbers will likely cover a huge majority of your users.
Second, I agree with everyone saying that this is a bad idea. It should not be hard to find a different solution if you are really that concerned about hitting the limit. Even if you do test on every browser, there are still firewalls, etc. to worry about, and there is absolutely no way you will be able to test every combination (and I'm almost positive that no one else has done this before you). You will not be able to get a hard limit for every case.
Though in theory, this should all work out fine, there might later be that one edge case that bites you in the butt if you decide to do this.
TL;DR: This is a bad idea. Save yourself the trouble and find a real solution instead of a workaround.
Edit: Since you mention that the requests can come from several types of sources, why not just specify the source in the request header and have the data contained entirely in the body? Have some kind of Source or ClientType field in the header that specifies where the request is coming from. If it's coming from a browser, include the HTML in the body; if it's coming from a PHP application, put some PHP-specific stuff in there; and so on. If the field is empty, don't add any extra data at all.
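A minimal sketch of that idea with Python requests; the header name and endpoint are made up for illustration:

import requests

resp = requests.post("http://example.com/api/endpoint",
                     headers={"X-Client-Type": "browser"},   # who is asking
                     data="<p>the actual payload travels in the body, not in headers</p>")
print(resp.status_code)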
The RFC for HTTP/1.1 clearly does not limit the length of the headers or the body.
According to this page, modern browsers (Firefox, Safari, Opera), with the exception of IE, can handle very long URIs: https://web.archive.org/web/20191019132547/https://boutell.com/newfaq/misc/urllength.html. I know that is different from receiving headers, but at least it shows that they can create and send huge HTTP requests (possibly of unlimited length).
If there's any limit in the browsers, it would be something like the size of the available memory or the limit of a variable type, etc.
Theoretically, there's no limit to the amount of data that can be sent in the browser. It's almost like saying there's a limit to the amount of content that can be in the body of a web page.
If possible, try to transmit the data through the body of the document. To be on the safe side, consider splitting the data up, so that there are multiple passes for loading.
