What data formats can AJAX transfer?

I'm new to AJAX, but as an overview I'd like to know what formats you can upload and download. Is it limited to JSON or XML, or can you even send binary types like MP3, or UTF-8 HTML? And finally, do you have full control over the data, byte for byte, in something like a byte array, or is only a string sent and received?

If we are talking about AJAX, we are talking about JavaScript and about XMLHttpRequest, right?
An XMLHttpRequest is just an HTTP request, so it can transfer anything. But there is no byte array in JavaScript; there are only strings, numbers and such. Everything you get from an AJAX call is a piece of text (responseText). That text might additionally be parsed as XML, which gives you responseXML. Special character encodings are more a matter of the HTTP transport.
The binary limitation is not an AJAX issue but a JavaScript one. There are some weird string encodings for smuggling byte data through JavaScript (especially for images), but they are not a general solution.
HTML is not a problem, and it is the most prominent use case: the request delivers an HTML string, which is assigned to some DOM node via innerHTML, and the browser parses it from there.
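As a minimal sketch of that most prominent use case (the URL '/fragment.html' and the element id 'target' are placeholder assumptions, not anything from the question):

    // Fetch an HTML fragment and insert it into the page.
    var xhr = new XMLHttpRequest();
    xhr.open('GET', '/fragment.html', true); // true = asynchronous
    xhr.onreadystatechange = function () {
        if (xhr.readyState === 4 && xhr.status === 200) {
            // responseText is always a string; assigning it to
            // innerHTML makes the browser parse it as HTML
            document.getElementById('target').innerHTML = xhr.responseText;
        }
    };
    xhr.send(null);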

Since data is transported via HTTP, you will have to make sure that you use some kind of encoding. One of the most popular is Base64 encoding. You can find more information at: http://www.webtoolkit.info/javascript-base64.html
The methodology is to Base64-encode the data you would like to send, then Base64-decode the data at the server (or the client) and use the original data as you intended.
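For a quick illustration, browsers also ship the built-ins btoa and atob for this round trip (a minimal sketch; note that btoa only accepts Latin-1 characters, which is the gap libraries like the one linked above fill):

    // Base64 round trip with the built-in btoa/atob functions.
    var encoded = btoa('raw data to send'); // 'cmF3IGRhdGEgdG8gc2VuZA=='
    // ...transport `encoded` in the request, then on the other end:
    var decoded = atob(encoded);            // back to 'raw data to send'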

You can transfer any type of data, either strings or bytes.

You can send anything you like; the problem may be how to handle it once you get it ;)
Standard HTML is probably the most common type of AJAX content in use out there. You can choose the character encoding too, although it's always best to stick with one encoding throughout.

AJAX simply means you're transferring data asynchronously over HTTP with a JavaScript call. Your script makes a "normal" HTTP request using the XMLHttpRequest object. However, as the name implies, it's really only suited to text-based data formats, since you generally want to perform some action on the client side with the data you got back from the server (not always, though; sometimes people send XMLHttpRequests only to update something on the server).
On a side note, I have never seen an application where sending binary data would have been appropriate anyway.
Most often, people send data to the server with POST or GET, which are basically the HTTP-native ways to transfer name-value pairs. More complex data, for example hierarchical structures, needs to be encoded somehow. XML documents can be created natively in JavaScript, sent over to the server, and parsed into whatever data types are necessary. But since XML can be a bit of a pain, many devs use JSON-encoded data instead, because it's easy to generate and easy to parse.
What the server sends back is just as arbitrary. Usually, you specify a callback function in your JavaScript that handles the incoming data. Again, the popular choices are XML and JSON; they parse easily into a document object or an array structure, respectively. You could also send plain text or some other packaging, but remember that you then have to take care of extracting the usable data yourself. Sometimes it can also be beneficial to send actual HTML fragments to the client to update something on the page directly.
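For illustration, such a JSON round trip over a raw XMLHttpRequest might look like this minimal sketch (the URL and field names are invented, and native JSON.parse/JSON.stringify support, or a shim such as json2.js, is assumed):

    // POST a JSON-encoded structure and parse a JSON reply.
    var xhr = new XMLHttpRequest();
    xhr.open('POST', '/api/save', true);
    xhr.setRequestHeader('Content-Type', 'application/json');
    xhr.onreadystatechange = function () {
        if (xhr.readyState === 4 && xhr.status === 200) {
            var result = JSON.parse(xhr.responseText); // text -> object/array
            alert(result.status);
        }
    };
    xhr.send(JSON.stringify({ name: 'value', items: [1, 2, 3] }));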
For starters, I suggest you have a look at jQuery. It's a very lightweight framework that abstracts away much of the evil compatibility stuff and lets you write AJAX requests very nicely.
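As a taste of what the abstraction buys you, the same exchange in jQuery shrinks to something like this (again, the URL and fields are placeholders):

    // jQuery handles the XHR plumbing; dataType 'json'
    // makes it parse the response for you.
    $.ajax({
        url: '/api/save',
        type: 'POST',
        data: { name: 'value' },   // sent as name-value pairs
        dataType: 'json',
        success: function (result) {
            alert(result.status);
        }
    });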

You can move anything that can be sent over HTTP. There are restrictions requiring the call to be made to the same domain the page was loaded from, but none on the content of the transfer. You can use either GET or POST transactions, too.

There is a Digg the Blog entry titled DUI.Stream and MXHR that shows off what they call "Multipart XMLHttpRequests." It is alpha code now, but there is a demo that handles images.

Related

Save file from POST request to disk without storing in memory with Python's BaseHTTPServer

I'm writing an HTTP server in Python 2 with BaseHTTPServer, and it is expected to accept multiple connections at the same time; on each connection the user can send a large file through a POST request. My understanding, however, is that the whole request is stored in the server's memory before being processed, so several files uploaded at the same time could exceed the amount of memory on the server. Is there any way to stream the file/request directly to a file on disk instead of storing it in memory?
BaseHTTPServer doesn't come with a POST handler out of the box, so you'll have to implement it yourself or find an implementation that works for you. (These are easy to search for; here's one I found that looked straightforward.)
Your question is similar to this question about limiting the max size of a POST; the answer points out that you'll need to read through all that data in order to ensure proper browser functionality. The comments to that answer seem to indicate the use of other techniques ("e.g. AJAX and realtime notifications via WebSocket." #dmitry-nedbaylo)

Transfer XML as text or as Stream (Binary)

We would like to transfer an XML document to a Web API that can accept text as well as binary data.
What is the best way to transfer it in terms of traffic size?
Is it better to transfer it as clear text or as a stream of binary data?
If you are concerned that the XML data you want to transfer is too large, then you can try using compression, gzip compression being the most popular. Web API has some built-in functionality for this but you could also "roll your own" if you like, for example if you want a different compression algorithm.
Fortunately, there's plenty of code around to help with compressing and decompressing your data stream. Take a look at the following:
MS nuget: https://www.nuget.org/packages/Microsoft.AspNet.WebApi.MessageHandlers.Compression/
http://benfoster.io/blog/aspnet-web-api-compression (blog article with a link to GitHub code)
https://github.com/benfoster/Fabrik.Common/tree/master/src/Fabrik.Common.WebAPI (the GitHub code mentioned above)
(SO) Compression filter for Web API
Finally, you could consider using Expect: 100-Continue. If an API client is about to send a request with a large entity body, like a POST, PUT, or PATCH, it can send “Expect: 100-continue” in its HTTP headers and wait for a “100 Continue” response before sending the entity body. This allows the API server to verify much of the request's validity before the client wastes bandwidth on an entity body that will only earn an error response (such as a 401 or a 403). Supporting this functionality is not very common, but it can improve API responsiveness and reduce bandwidth in some scenarios (RFC 2616 §8.2.3).
While I appreciate an answer full of links can be problematic if those links go out-of-date or get deleted, explaining Web API compression here is just too large a subject. I hope my answer steers you in a useful direction.

Is there a performance benefit switching jqGrid data source from XML to JSON (on IE8)

I have a jqGrid on a web page, with large data sets. Up to 100 rows (in XML format) are sometimes sent to the browser at a time. On IE8 the combined effect is a noticeable delay.
Will changing the data source to JSON (instead of XML) have a measurable effect in these conditions?
Note: I know this is an IE specific problem. On Chrome I get an instant response on the same page. But I'm currently targeting IE8 :(
JSON has native support in JavaScript, so in most cases working with JSON is quicker. Moreover, a JSON response from the server is smaller than the corresponding XML response. So I would recommend you switch to JSON.
Nevertheless, in many cases a real jqGrid has other performance bottlenecks which are independent of the data format. Moreover, you can choose between different JSON layouts to represent your data. So the best recommendation could be given if you append your question with the current jqGrid definition, state which field is the best id for the data rows, and post the test XML data.
UPDATED: Look at some old answers about jqGrid performance optimization: this, this and this.
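For illustration only, a grid definition switched to JSON might look like the following minimal sketch (the grid id, URL and columns are invented, not taken from the question):

    // Hypothetical jqGrid definition with a JSON data source.
    jQuery('#grid').jqGrid({
        url: '/data.json',
        datatype: 'json',                  // was 'xml' before the switch
        colModel: [
            { name: 'id', width: 60, key: true },
            { name: 'name', width: 200 }
        ],
        rowNum: 100,                       // rows fetched per request
        jsonReader: { repeatitems: false } // rows arrive as {id: .., name: ..} objects
    });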

Client side processing with javascript vs server side with mod_perl

I have a Perl script that converts strings to and from different encodings, like Base64, ASCII or hex. Now I am writing an AJAX front end for it, and my question is: if I want to automate detection of the encoding of the submitted string, is it more efficient to run a regex over the string in JavaScript before sending it to the server, or is it faster to leave it to the Perl script to figure out what type of string it is?
To clarify, I am asking which of these two is better:
String submitted
Javascript detects the encoding
AJAX submits encoding and the string to perl script
Perl script returns decoded string
or
String submitted
AJAX submits the string to perl script
Perl script detects encoding and returns decoded string
Is there a particular rule of thumb for where this type of processing should be performed, and which do you think is the better (meaning faster) implementation?
You must validate your data on the server. Period. Otherwise you'll be sailing off into uncharted waters as soon as some two-bit wannabe "hacker" passes you a Base64 string alongside a tag claiming that your JavaScript thinks it's hex.
Given this, it's up to you whether you also want to detect the encoding on the client side. This has some potential benefits, since it allows you to avoid sending data to the server at all if it's encoded in an invalid fashion, or to tell the user what encoding was detected and let them correct it in ambiguous cases (e.g., hex digits are a subset of the Base64 character set, so any hex string could potentially be Base64). Just remember that, if the client passes an encoding to the server, the server must still sanity-check the received encoding specifier and be prepared to ignore it (or reject the request completely) if it's inappropriate for the corresponding data.
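Such a client-side detector could be as simple as the following sketch (the function name and exact patterns are illustrative; they are one way to surface the ambiguity described above):

    // Classify a string as hex or Base64 by regex. Every hex string
    // is also made of valid Base64 characters, so treat the result
    // as a hint only; the server must re-check it.
    function guessEncoding(s) {
        if (/^(?:[0-9a-fA-F]{2})+$/.test(s)) return 'hex';
        if (s.length % 4 === 0 && /^[A-Za-z0-9+\/]+={0,2}$/.test(s)) return 'base64';
        return 'unknown';
    }
    // guessEncoding('48656c6c6f') -> 'hex'
    // guessEncoding('SGVsbG8=')   -> 'base64'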
This depends on the scale.
If there will be a LOT of client requests doing this, it's definitely "faster" to do it on the client side (e.g. in JS before the AJAX call), since putting it on the server side makes the server process ALL of those requests, which will compete for the server's CPU resources, whereas on the client side each client only performs its own detection.
If you anticipate only very few concurrent requests, then doing it in Perl is probably marginally faster, since Perl's regex implementation is likely better/faster than JavaScript's (I don't have any stats to back this up, though) and presumably the server has a better CPU.
But I would not expect the server-side margin to be terribly big, considering that the whole processing shouldn't take long on either side, so I'd advise going with client-side checking, since that (as per the first paragraph) scales better.
If the performance difference between the two really matters to you a lot, you should actually implement both and benchmark under both the average anticipated and the maximum projected client loads.

Buffered Multipart Form Posts in Ruby

I am currently using Net::HTTP in a Ruby script to post files to a website via a multipart form post. It works great for small files, but I frequently have to send very large files using this script, and HTTP#post only seems to accept post data as a String object, which means that the file I'm sending has to be read into memory before anything can be sent. This script is running on a busy production server, so it's unacceptable to gobble up hundreds of megabytes of RAM just to send a file.
Ideally, there'd be a method that could be given a buffer size and an IO object, and would send off buffer-sized chunks of data, reading from the IO object only as required. What would be the best way to make this happen? Did I miss something relevant in Net::HTTP?
Update: Net::HTTP#body_stream(input) looks good, though the documentation is rather... sparse. Anyone able to point me to a good example of this in action?
Actually I managed to upload a file using body_stream. The full source code is here:
http://stanislavvitvitskiy.blogspot.com/2008/12/multipart-post-in-ruby.html
Use Net::HTTP#body_stream(input)
