What is the maximum possible length/size of a JSON object that can be passed in a POST request using AJAX? [duplicate]

I am using jQuery, JSON, and AJAX for a comment system. I am curious: is there a size limit on what you can send through/store with JSON? For example, if a user types a large amount of text and I send it through JSON, is there some sort of maximum limit?
Also, can any kind of text be sent through JSON? For example, I sometimes allow users to use HTML; will this be OK?

JSON is similar to other data formats like XML - if you need to transmit more data, you just send more data. There's no inherent size limitation to the JSON request. Any limitation would be set by the server parsing the request. (For instance, ASP.NET has the "MaxJsonLength" property of the serializer.)
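As for the HTML part of the question: any text can go into a JSON string as long as it is properly escaped, and JSON.stringify does that for you. A minimal sketch of posting a comment with jQuery (the URL and field names here are hypothetical, just for illustration):

// The comment body may contain HTML; JSON.stringify escapes quotes, tags, etc.
var comment = {
    author: "alice",
    body: "<p>Some <strong>HTML</strong> typed by the user.</p>"
};

$.ajax({
    url: "/comments",                      // hypothetical endpoint
    type: "POST",
    contentType: "application/json",
    data: JSON.stringify(comment),
    success: function (response) {
        console.log("comment saved", response);
    }
});

Whatever size limit you eventually hit will come from the server-side parser or request-size settings, not from JSON itself.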

There is no fixed limit on how large a JSON data block, or any of its fields, can be.
There are limits to how much JSON the JavaScript implementations of various browsers can handle (e.g. around 40MB in my experience). See this question for example.
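If you want a rough feel for where a particular browser starts to struggle, a crude probe like this can help (the 1 MB chunk and the round-trip through JSON.stringify/JSON.parse are just an illustration, not a benchmark):

// Build an array of ~1 MB strings, serialize it, and parse it back.
// Increase `megabytes` until the browser throws or becomes unusable.
function probeJsonSize(megabytes) {
    var chunk = new Array(1024 * 1024 + 1).join("x"); // string of ~1,048,576 chars
    var payload = [];
    for (var i = 0; i < megabytes; i++) {
        payload.push(chunk);
    }
    var text = JSON.stringify(payload);
    return JSON.parse(text).length; // round-trip to exercise the parser too
}

// probeJsonSize(40); // try sizes around the limit you care about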

It depends on the implementation of your JSON writer/parser. Microsoft's DataContractJsonSerializer seems to have a hard limit around 8kb (8192 I think), and it will error out for larger strings.
Edit:
We were able to resolve the 8K limit for JSON strings by setting the MaxJsonLength property in the web config as described in this answer: https://stackoverflow.com/a/1151993/61569

Surely everyone's missed a trick here. The practical ceiling on the size of a JSON file is 2^64 bytes (18,446,744,073,709,551,616), at least on 64-bit infrastructures.
For all intents and purposes we can assume it's unlimited, as you'll probably have a hard time hitting this issue...

Implementations are free to set limits on JSON documents, including the size, so choose your parser wisely. See RFC 7159, Section 9 ("Parsers"):
"An implementation may set limits on the size of texts that it accepts. An implementation may set limits on the maximum depth of nesting. An implementation may set limits on the range and precision of numbers. An implementation may set limits on the length and character contents of strings."

There is really no limit on the size of JSON data to be sent or received. JSON data can also be sent as a file. How much JSON data can be handled depends on the capabilities of the browser you are working with.

If you are working with ASP.NET MVC, you can solve the problem by setting MaxJsonLength on your result:

// Build the JSON result as usual...
var jsonResult = Json(new
{
    draw = param.Draw,
    recordsTotal = count,
    recordsFiltered = count,
    data = result
}, JsonRequestBehavior.AllowGet);

// ...then raise the serializer's length limit before returning it.
jsonResult.MaxJsonLength = int.MaxValue;
return jsonResult;

What is the requirement? Are you trying to send a large SQL table as a JSON object? I don't think that is practical.
You will consume a large chunk of server resources if you push through with this.
You will also not be able to show progress with a progress bar, because your app will just be waiting for the server to reply, which could take ages.
What I recommend is to chop the request into batches: say, request the first 1000 rows, then the next 1000, and so on until you have what you need (see the sketch below).
This way you can also show a nice progress bar, since extracting that amount of data can take quite a long time.
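A rough sketch of what that batching could look like on the client (jQuery assumed; the URL, parameter names, and the batch size of 1000 are hypothetical):

function fetchInBatches(total, batchSize, onProgress, onDone) {
    var rows = [];
    function next(offset) {
        // Ask the server for one batch at a time.
        $.getJSON("/api/rows", { offset: offset, limit: batchSize }, function (batch) {
            rows = rows.concat(batch);
            onProgress(Math.min(100, Math.round(100 * rows.length / total)));
            if (batch.length < batchSize || rows.length >= total) {
                onDone(rows);
            } else {
                next(offset + batchSize);
            }
        });
    }
    next(0);
}

// Usage: update a progress bar as the batches arrive.
// fetchInBatches(30000, 1000,
//     function (pct) { $("#progress").css("width", pct + "%"); },
//     function (rows) { console.log("got", rows.length, "rows"); });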

JavaScriptSerializer's MaxJsonLength property controls the maximum length of JSON strings. The default is 2,097,152 characters, which is equivalent to 4 MB of Unicode string data.
See:
https://learn.microsoft.com/en-us/dotnet/api/system.web.script.serialization.javascriptserializer.maxjsonlength?view=netframework-4.7.2

Related

Transferring lots of objects with Guid IDs to the client

I have a web app that uses Guids as the PK in the DB for an Employee object and an Association object.
One page in my app returns a large amount of data showing all the Associations that all Employees may be a part of.
So right now, I am sending to the client essentially a bunch of objects that look like:
{association_id: guid, employees: [guid1, guid2, ..., guidN]}
It turns out that many employees belong to many associations, so I am sending down the same Guids for those employees over and over again in these different objects. For example, it is possible that I am sending down 30,000 total guids across all associations in some cases, of which there are only 500 unique employees.
I am wondering if it is worth me building some kind of lookup index that I also send to the client like
{ 1: Guid1, 2: Guid2 ... }
and replacing all of the Guids in the objects I send down with those ints,
or if simply gzipping the response will compress it enough that this extra effort is not worth it?
Note: please don't get caught up in the details of if I should be sending down 30,000 pieces of data or not -- this is not my choice and there is nothing I can do about it (and I also can't change Guids to ints or longs in the DB).
You wrote the following at the end of your question:
Note: please don't get caught up in the details of if I should be
sending down 30,000 pieces of data or not -- this is not my choice and
there is nothing I can do about it (and I also can't change Guids to
ints or longs in the DB).
I think that is your main problem. If you don't solve the main problem, you may be able to reduce the size of the transferred data by a factor of 10, for example, but you still won't have solved the main problem. Let us think about the question: why does so much data need to be sent to the client (to the web browser)?
The data on the client side is needed to display some information to the user. The monitor is not large enough to show 30,000 items on one page, and no user is able to grasp that much information. So I am sure that you display only a small part of the information. In that case you should send only the small part of the information that you actually display.
You don't describe how the guids will be used on the client side. If you need the information during row editing, for example, you can transfer the data only when the user starts editing. In that case you need to transfer the data for only one association.
If you need to display the guids directly, then you can't display all the information at once anyway. So you can send the information for one page only. If the user scrolls or presses a "next page" button, you can send the next portion of data. In this way you can dramatically reduce the size of the transferred data.
If you have no possibility to redesign that part of the application, you can implement your original suggestion: by replacing a GUID like "{7EDBB957-5255-4b83-A4C4-0DF664905735}" or "7EDBB95752554b83A4C40DF664905735" with a number like 123, you reduce the size of a GUID from 34 characters to 3. If you additionally send an array of "guid mapping" elements like
123:"7EDBB95752554b83A4C40DF664905735",
you can reduce the original data size of 30000*34 = 1,020,000 characters (about 1 MB) to 500*39 + 30000*3 = 19,500 + 90,000 = 109,500 characters (about 107 KB). So you reduce the size of the data almost 10 times. Using compression of dynamic data on the web server can reduce the size further.
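To make the mapping idea concrete, here is a small client-side sketch (the field names and payload shape are hypothetical): the server sends one lookup table of short ids to GUIDs plus the associations expressed with short ids, and the client expands them back only when it actually needs a full GUID.

// What the server might send: one guid dictionary plus compact associations.
var payload = {
    employees: {
        123: "7EDBB95752554b83A4C40DF664905735",
        124: "0F2B78F0AAAA4b83A4C40DF664905736"
    },
    associations: [
        { association_id: 1, employees: [123, 124] },
        { association_id: 2, employees: [123] }
    ]
};

// Expand the short ids of one association back into full GUIDs.
function employeeGuids(association) {
    return association.employees.map(function (shortId) {
        return payload.employees[shortId];
    });
}

// employeeGuids(payload.associations[0])
//   -> ["7EDBB9575255...", "0F2B78F0AAAA..."]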
In any case you should examine why your page is so slow. If the program works over a LAN, then transferring even 1 MB of data can be quick enough. Probably the page is slow while placing the data on the web page. I mean the following: if you modify an element on the page, the positions of all existing elements have to be recalculated. If you work with disconnected DOM objects first and only then place the whole portion of data on the page, you can improve the performance dramatically. You didn't post which technology you use in your web application, so I can't include examples. If you use jQuery, for example, I could give an example that makes clearer what I mean.
The lookup index you propose is nothing other than a "custom" compression scheme. As amdmax stated, this will improve your performance if you have a lot of the same GUIDs, but so will gzip.
IMHO, the extra effort of writing the custom encoding will not be worth it.
Oleg correctly states that it might be worth fetching the data only when the user needs it. But this of course depends on your specific requirements.
if simply gzipping the response will compress it enough that this extra effort is not worth it?
The answer is: Yes, it will.
Compressing the data will remove redundant parts as effectively as possible (depending on the algorithm); the redundancy is restored on decompression.
To be sure, just generate the data both uncompressed and compressed and compare the results. You can count the duplicate GUIDs to calculate how big your data block would be with the dictionary compression method, but I suspect gzip will do better because it can also compress the syntactic elements like braces, colons, etc. inside your data object.
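If you want to do that comparison quickly outside the browser, a sketch along these lines (assuming Node.js and its built-in zlib module) will print both sizes:

var zlib = require("zlib");

// Report raw and gzipped byte sizes for any candidate payload object.
function report(label, obj) {
    var raw = Buffer.from(JSON.stringify(obj), "utf8");
    var gz = zlib.gzipSync(raw);
    console.log(label + ": raw " + raw.length + " bytes, gzipped " + gz.length + " bytes");
}

// Usage with your two candidate responses (hypothetical variables):
// report("repeated guids", plainPayload);
// report("guid dictionary", dictionaryPayload);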
So what you are trying to accomplish is Dictionary compression, right?
http://en.wikibooks.org/wiki/Data_Compression/Dictionary_compression
Instead of GUIDs, which are 16 bytes long (in binary form), you will get ints, which are 4 bytes long. And you will get a dictionary full of key/value pairs that associates each guid with some int value, right?
It will decrease your transfer time when there are many objects with the same id used. But it will cost CPU time to compress before transfer and decompress after transfer. So what is the amount of data you transfer? Is it MB / GB / TB? And is there any good reason to compress it before sending?
I do not know how dynamic your data is, but I would:
On a first call, send two directories/dictionaries mapping short ids to long GUIDs, one for your Associations and one for your Employees, e.g. {1: AssoGUID1, 2: AssoGUID2, ...} and {1: EmpGUID1, 2: EmpGUID2, ...}. These directories may also contain additional information on the Association and Employee instances; I suspect you do not simply display GUIDs.
On subsequent calls, just send the index of Employees per Association, e.g. { 1: [2,4,5], 3: [2,4], ... }, the key being the association short id and the ids in the array value being the short ids of the employees. Given your description, building the reverse index (Employee to Associations) may give a better result size-wise (but with higher processing).
Then it's all down to associative array manipulation, which is straightforward in JS.
Again, if your data is (very) dynamic server side, the two directories will soon be obsolete and maintaining synchronization may cost you a lot.
I would start by answering the following questions:
What are the performance requirements? Are there size requirements? Speed requirements? What is the minimum performance that is truly needed?
What are the current performance metrics? How far are you from the requirements?
You characterized the data as possibly being mostly repeats. Is that the normal case? If not, what is?
The 2 options you listed above sound reasonable and trivial to implement. Try creating a look-up table and see what performance gains you get on actual queries. Try zipping the results (with look-ups and without), and see what gains you get.
In my experience if you're not TOO far from the goal, performance requirements are often trial and error.
If those options don't get you close to the requirements, I would take a step back and see if the requirements are reasonable in the time you have to solve the problem.
What you do next depends on which performance goals are lacking. If it is size, you're starting to be limited if you're required to send the entire association list every time. Is that truly a requirement? Can you send the entire list once, and then just updates?

Request size vs Server load

I was writing the spec on how to organize the query parameters that are sent in a HTTP Request, and I came up with the following:
All parameters are prefixed with the entity to which they belong, for example "a.b", which is read "b of entity a"; that way each parameter is clearly mapped to the corresponding entity. But what if two different entities share a query parameter? To avoid repetition and reduce request size I came up with the following micro-format: a request-wide entity called "shared", where each property of shared represents a property that is shared among entities, e.g.
POST /app/my/resource HTTP/1.1
a.p = v
b.p = v
c.p = v
d.p = v
Here it is clear that property p is shared among a, b, c, and d, so this could be sent as:
POST /app/my/resource HTTP/1.1
shared.p = a:b:c:d%v
Now the request is smaller and I'm being a bit more DRY; however, this adds an extra burden to the server, as it has to parse the string to process the values.
Probably in my example the differences are insignificant and I could choose either, but I'd like to know what you think about it. Which would you prefer? Maybe the size of the request doesn't matter, or maybe parsing the string is not such a big deal when it is short, but what happens when both the request and the string scale up? Which one would be better, and what are the tradeoffs?
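For concreteness, here is a sketch of what expanding that "shared" micro-format back into ordinary per-entity key/value pairs might look like (JavaScript; the function and parameter names are made up, and the format is the one proposed above):

// Expand { "shared.p": "a:b:c:d%v" } into { "a.p": "v", "b.p": "v", ... }.
function expandShared(params) {
    var expanded = {};
    Object.keys(params).forEach(function (key) {
        var dot = key.indexOf(".");
        var entity = key.slice(0, dot);
        var prop = key.slice(dot + 1);
        if (entity === "shared") {
            var parts = params[key].split("%");   // "a:b:c:d%v" -> ["a:b:c:d", "v"]
            var owners = parts[0].split(":");
            var value = parts[1];
            owners.forEach(function (owner) {
                expanded[owner + "." + prop] = value;
            });
        } else {
            expanded[key] = params[key];
        }
    });
    return expanded;
}

// expandShared({ "shared.p": "a:b:c:d%v", "a.q": "w" })
//   -> { "a.p": "v", "b.p": "v", "c.p": "v", "d.p": "v", "a.q": "w" }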
What you are showing is a compression algorithm. Beware that payloads are often compressed at the protocol layer already (HTTP with gzip Content-Encoding; see HTTP compression examples). Compression algorithms are advanced enough to compress duplicate string items, so you probably won't win much with a custom compression scheme.
Generally, try not to optimize prematurely. First show that you have a response-time or payload-size issue, and only then optimize. Your compression scheme itself is a good idea, but it makes the payload more complicated than normal key/value pairs (the application/x-www-form-urlencoded Content-Type). For maintenance reasons, head for the simplest design possible.
Just to throw this out there, I think the answer would depend on what platform your back-end servers are running to process the requests. For example, the last time I checked, the Perl-based mod_perl could parse those strings much faster than something like ASP.NET.

Overflow problem with winsock in vb6

I have built a simple project which uses the "Winsock" control.
When I receive any data I put it in a variable, because I can't put it in a textbox: it is a file, not text.
But if I send a big file I get an "Overflow" error.
Is there any way to fix this problem?
A VB variable-length string can in theory be up to 2 GB in size; its actual maximum size depends on available virtual memory, which is also limited to 2 GB for the entire application. But since VB stores strings in Unicode format, a string can only contain about 1 GB of text.
(maximum length for string in VB6)
If this is your problem, try splitting the incoming data across several strings.
Are you handling the SendComplete event properly before sending more data?
Otherwise you will get a buffer overflow from the WinSock control.
You need to split your data into smaller packets (around 2-5 KB each should do it) and send each packet individually, then reconstruct the packets at the other end. You could add a unique character, say Chr(0), at the end of the data so that the receiving end knows that all the data for that transmission has been received.
This is quite a simplified solution to this problem - a better method would be to devise a simple protocol for data handshaking so you know each packet has been received.

Recommended size of data returned by EJB API call

Does anyone know if there is any limitation on the size of data that can be returned from an EJB API call?
Let's say the output of the API is an array of complex objects. How long can the array be?
We are planning to use pagination to retrieve data in portions and want to determine the ideal portion/bulk size.
Does anyone know if there is any limitation on the size of data that can be returned from an EJB API call?
To my knowledge, there is no hard limit on the size of the object returned in an RMI call. In practice, you might be limited by memory resources... and "time" (e.g. a transaction timeout), but this shouldn't happen if you're not doing insane things.
Let's say the output of the API is an array of complex objects. How long can the array be?
Even if the answer to the previous question had been "X", I hope you realize it would still have been impossible to answer this one.
We are planning to use pagination to retrieve data in portions and want to determine the ideal portion/bulk size.
IMO, this is more a usability issue than a technical one, so I suggest discussing it with your usability expert.
Talk to the customer/product owner/project stakeholder and ask for their expectations. Write tests automating the specification. Tweak the code until the tests pass.

What data formats can AJAX transfer?

I'm new to AJAX, but as an overview I'd like to know what formats you can upload and download. Is it limited to JSON or XML, or can you even send binary types like MP3 or UTF-8 HTML? And finally, do you have full control over the data, byte for byte, in something like a byte array, or is only a string sent/received?
If we are talking about AJAX, we are talking about JavaScript and about XMLHttpRequest, right?
The XMLHttpRequest, which is just an HTTP request, can transfer everything. But there is no byte array in JavaScript, only strings, numbers and such. Everything you get from an AJAX call is a piece of text (responseText). That might be parsed into XML (which gives you responseXML). Special encodings should be more a matter of the HTTP transport.
The binary stuff is not AJAX-dependent but JavaScript-dependent. There are some weird string encodings for delivering byte data in JavaScript (especially for images), but it is not a general solution.
HTML is not a problem, and that is the most prominent use case. From this type of request you get an HTML string delivered, and it is added to some node in the DOM via innerHTML, which parses the HTML (see the sketch below).
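A bare-bones version of that HTML use case, using XMLHttpRequest directly (the URL and element id are hypothetical):

var xhr = new XMLHttpRequest();
xhr.open("GET", "/fragments/comment-list.html", true);
xhr.onreadystatechange = function () {
    if (xhr.readyState === 4 && xhr.status === 200) {
        // responseText is just a string; innerHTML parses it into DOM nodes.
        document.getElementById("comments").innerHTML = xhr.responseText;
    }
};
xhr.send();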
Since data is transported via HTTP, you will have to make sure that you use some kind of encoding. One of the most popular is base64 encoding. You can find more information at: http://www.webtoolkit.info/javascript-base64.html
The methodology is to base64-encode the data you would like to send, then base64-decode the data at the server (or the client) and use the original data as you intended.
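A minimal sketch using the browser's built-in btoa/atob functions (note these handle ASCII/Latin-1 strings; real binary data or full UTF-8 needs the extra handling the linked article describes):

var original = "some data to transfer";
var encoded = btoa(original);   // base64-encode before sending
// ...send `encoded` as part of your request, decode it on the other side...
var decoded = atob(encoded);    // base64-decode to recover the original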
You can transfer any type of data, either strings or bytes.
You can send anything you like, the problem may be how to handle it once you get it ;)
Standard HTML is probably the most common type of ajax content in use out there - you can choose character encoding too, although it's always best to stick with one type of encoding.
AJAX simply means you're transferring data asynchronously over HTTP with a JavaScript call. So your script makes a "normal" HTTP request using the XmlHttpRequest() object. However, as the name implies, it's really only suited for text-based data formats since you generally want to perform some action on the client side with the data you got back from the server (not always though, sometimes people just send XmlHttpRequests only to update something on the server).
On a side note, I have never seen an application where sending binary data would have been appropriate anyway.
Most often, people choose to send data to the server with POST or GET (which are basically the HTTP methods for transferring name-value pairs). More complex data, for example hierarchical structures, needs to be encoded somehow. XML documents can be built natively in JavaScript, sent over to the server and parsed into whatever data types are necessary. But since XML can be a bit of a pain, many devs use JSON-encoded data instead because it's easy to generate and easy to parse.
What the server sends back is equally arbitrary. Usually, you specify a callback function in your JavaScript that handles the incoming data. Again, the popular choices are XML and JSON; they parse easily into a document object or an array structure respectively. You could also send plain text or some other packaging, but remember that you then have to take care of extracting the usable data from it yourself. Sometimes it can also be beneficial to send actual HTML fragments to the client to update something on the page directly.
For starters, I suggest you have a look at jQuery. It's a very lightweight framework that abstracts away much of the evil compatibility stuff and lets you write AJAX requests very nicely.
You can move anything that can be sent over HTTP. There are restrictions requiring the call to be made to the same domain the page was loaded from, but not on the content of the transfer. You can do either GET or POST transactions too.
There is a Digg the Blog entry titled DUI.Stream and MXHR that shows off what they call "Multipart XMLHttpRequests." It is alpha code now, but there is a demo that handles images.
