I'm providing RESTful API to my (JS) client from (Java Spring) server.
The main site page contains a number of logical blocks (news, latest comments, some trending stuff), each of which has a corresponding entity on the server. Which is the right way to go: handle one request like
/api/main_page/ ->
{
news: {...}
comments: {...}
...
}
or let the client do a few requests like
/api/news/
/api/comments/
...
I know in general it's better to have one large request/response, but is this an answer to this situation as well?
Ideally, you should have different API calls for fetching individual configurable content blocks of the page from the same API.
This way your content blocks are loosely coupled to each other.
You can extend, port (to a new framework), and modify them independently at any time.
This becomes extremely useful as the application grows.
Switching off a feature is fairly easy in this case.
A/B testing is also easy in this case.
Writing automation is also very easy.
Overall it helps in reducing the testing effort.
But if you really want to fetch this in one call, then you should add additional params to the request; when the server sees such a param, it adds the corresponding independent JSON block to the response by calling its own method in the BL (business-logic) layer.
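For instance, a minimal Spring-style sketch of such an aggregating endpoint (the path, the include parameter, and the NewsService/CommentService interfaces are hypothetical, not part of the question's actual code):

import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class MainPageController {

    private final NewsService newsService;        // hypothetical BL-layer services
    private final CommentService commentService;

    public MainPageController(NewsService newsService, CommentService commentService) {
        this.newsService = newsService;
        this.commentService = commentService;
    }

    // GET /api/main_page?include=news,comments
    @GetMapping("/api/main_page")
    public Map<String, Object> mainPage(
            @RequestParam(defaultValue = "news,comments") List<String> include) {
        Map<String, Object> page = new LinkedHashMap<>();
        // Each block is assembled only when the client asked for it,
        // by calling the corresponding method of the BL layer.
        if (include.contains("news")) {
            page.put("news", newsService.latest());
        }
        if (include.contains("comments")) {
            page.put("comments", commentService.latest());
        }
        return page;
    }
}

interface NewsService { Object latest(); }
interface CommentService { Object latest(); }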
And if speed is your concern, try caching these calls on the server for some time (depending on the type of application).
I think that, in general, multiple requests can be justified when the requested resources reflect parts of the system state (my personal rule of thumb, still a work in progress).
I.e., if a news item gets displayed in your client application a lot, I would request it once and reuse it wherever I can. If you aggregate here, you would need to request it again later, some items may never actually get displayed, and you have some magic to do if the representation of a news item differs between the aggregation and the /news/{id} resource.
This approach would increase communication if the page gets loaded for the first time, but decrease communication throughout your client application the longer it runs.
The state on the server gets copied request by request to your client or updated when needed (Etags, last-modified, etc.).
In your example it looks like /news and /comments are some sort of "latest" or "since last visit" collections, rather than everything.
If this is true, I would design them to be resources as well, like /comments/latest or similar.
But in any case I would have them contain only self-links to /news/{id} or /comments/{id} respectively. Then a request to /comments/latest results in a list of self-links, for which I would issue a request only if I don't already have that item (maybe I want to check whether the cached copy is still up to date).
It is also possible to trigger the request to a /news/{id} only if it gets actually displayed (scrolling, swiping).
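As a rough sketch of that idea (the resource path, the Link record, and the CommentRepository are invented for illustration), the "latest" resource could return nothing but self-links, leaving it to the client to decide which full representations it still needs to fetch:

import java.util.List;
import java.util.stream.Collectors;

import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
class LatestCommentsController {

    private final CommentRepository comments; // hypothetical repository

    LatestCommentsController(CommentRepository comments) {
        this.comments = comments;
    }

    // GET /comments/latest -> [{"self": "/comments/17"}, {"self": "/comments/16"}, ...]
    // The client follows a self-link with GET /comments/{id} only if it does not
    // already hold an up-to-date copy of that comment.
    @GetMapping("/comments/latest")
    List<Link> latest() {
        return comments.latestIds().stream()
                .map(id -> new Link("/comments/" + id))
                .collect(Collectors.toList());
    }

    record Link(String self) { }
}

interface CommentRepository {
    List<Long> latestIds();
}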
Probably the lifespan of a news item or a comment is a criterion for answering this question, meaning that caching in the client is not that vital to the system, as opposed to, say, a book in a book-store app.
The client (an AngularJS application) gets rather big lists from the server. The lists may have hundreds or thousands of elements, which can mean a few megabytes uncompressed (and some users (admins) get much more data).
I'm not planning to let the client get partial results as sorting and filtering should not bother the server.
Compression works fine (factor of about 10) and as the lists don't change often, 304 NOT MODIFIED helps a lot, too. But another important optimization is missing:
As a typical change to the lists is rather small (e.g., modifying two elements and adding a new one), transferring only the changes sounds like a good idea. I wonder how to do it properly.
Something like GET /offer/123/items should always return all the items in the offer number 123, right? Compression and 304 can be used here, but no incremental update. A request like GET /offer/123/items?since=1495765733 sounds like the way to go, but then browser caching does not get used:
either nothing has changed and the answer is empty (and caching it makes no sense)
or something has changed, the client updates its state and does never ask for changes since 1495765733 anymore (and caching it makes even less sense)
Obviously, when using the "since" query, nothing will be cached for the "resource" (the original query gets used just once or not at all).
So I can't rely on the browser cache and I can only use localStorage or sessionStorage, which have a few downsides:
it's limited to a few megabytes (the browser HTTP cache may be much bigger and gets handled automatically)
I have to implement some replacement strategy when I hit the limit
the browser cache stores already compressed data which I don't get (I'd have to re-compress them)
it doesn't work for the users (admins) getting bigger lists as even a single list may already be over limit
it gets emptied on logout (a customer's requirement)
Given that there's HTML 5 and HTTP 2.0, that's pretty unsatisfactory. What am I missing?
Is it possible to use the browser HTTP cache together with incremental updates?
I think there is one thing you are missing: in short, headers. What I'm thinking you could do, and that would match (most) of your requirements, would be this:
First GET /offer/123/items is done normally, nothing special.
Subsequent GET /offer/123/items requests will be sent with a Fetched-At: 1495765733 header, telling your server when the initial request was sent.
From this point on, two scenarios are possible.
Either there is no change, and you can send the 304.
If there is a change, however, return the new items since the timestamp previously sent as a header, but set Cache-Control: no-cache on your response.
This gets you to the point where you have incremental updates, with caching of the initial megabytes-sized payload.
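A hedged sketch of how the server side of this could look in Spring (the Fetched-At header handling follows this answer; the item type and the repository are assumptions):

import java.util.List;

import org.springframework.http.CacheControl;
import org.springframework.http.HttpStatus;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RequestHeader;
import org.springframework.web.bind.annotation.RestController;

@RestController
class OfferItemsController {

    private final OfferItemRepository items; // hypothetical data access

    OfferItemsController(OfferItemRepository items) {
        this.items = items;
    }

    @GetMapping("/offer/{offerId}/items")
    ResponseEntity<List<OfferItem>> items(
            @PathVariable long offerId,
            @RequestHeader(value = "Fetched-At", required = false) Long fetchedAt) {

        if (fetchedAt == null) {
            // First request: full list, cacheable by the browser as usual.
            return ResponseEntity.ok(items.findAll(offerId));
        }
        List<OfferItem> changed = items.findChangedSince(offerId, fetchedAt);
        if (changed.isEmpty()) {
            // Nothing new since the client's timestamp.
            return ResponseEntity.status(HttpStatus.NOT_MODIFIED).build();
        }
        // Incremental update: send only the changes, but keep the response out of
        // the cache so the full cached copy is not replaced by a partial one.
        return ResponseEntity.ok()
                .cacheControl(CacheControl.noCache())
                .body(changed);
    }
}

interface OfferItemRepository {
    List<OfferItem> findAll(long offerId);
    List<OfferItem> findChangedSince(long offerId, long timestamp);
}

record OfferItem(long id, String payload) { }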
There is still one drawback though: the caching is only done once; it won't cache updates. You said that your lists are not updated often, so it might already work for you, but if you really want to push this further, I could think of one more thing.
Upon receiving an incremental update, you could trigger in the background another request without the Fetched-At header that won't be used at all by your application, but will just be there to update your HTTP cache. It should not be as bad as it sounds performance-wise, since your framework won't update its data with the new one (and potentially trigger re-renders); the only notable drawback would be in terms of network and memory consumption. On mobile it might be problematic, but it doesn't sound like an app intended to be displayed there anyway.
I absolutely don't know your use-case and will just throw that out there, but are you really sure that doing some sort of pagination won't work? Megabytes of data sounds a lot to display and process for normal humans ;)
I would ditch the request/response cycle entirely and move to a push model.
Specifically, WebSockets.
This is the standard technology used on financial trading websites serving tables of real-time ticker data. Here is one such production application demonstrating the power of WebSockets:
https://www.poloniex.com/exchange#btc_eth
WebSocket applications have two types of state: global and user. The above link will show three tables of global data. When you're logged in, two additional tables of user data are displayed at the bottom.
This is not HTTP; you won't be able to just slap this into a Java Servlet. You'll need to run a separate process on your server which communicates over TCP. The good news is, there are mature solutions readily available. A Java-based solution with a very decent free licensing option, which includes both client and server APIs (and does integrate with Angular2) is Lightstreamer. They have a well-organized demo page too. There are also adapters available to integrate with your data sources.
You may be hesitant to ditch your existing servlet approach, but this will mean fewer headaches in the long run, and it scales marvelously. HTTP polling, even with well-designed header-only requests, does not scale well with large lists that update frequently.
---------- EDIT ----------
Since the list updates are infrequent, WebSockets are probably overkill. Based on the further details provided by comments on this answer, I would recommend a DOM-based, AJAX-updated sorter and filterer such as DataTables, which has some built-in options for caching. In order to reuse client data across sessions, ajax requests in the previous link should be modified to save the current data in the table to localStorage after every ajax request, and when the client starts a new session, populate the table with this data. This will allow the plugin to manage the filtering, sorting, caching and browser-based persistence.
I'm thinking about something similar to Aperçu's idea, but using two requests. The idea is still incomplete, so bear with me...
The client asks for GET /offer/123/items, possibly with the ETag and Fetched-At headers.
The server answers with
200 and a full list if either header is missing, or when there are too many changes since the Fetched-At timestamp
304 if nothing has changed since then
304 and a special Fetch-More header telling the client that more data is to be fetched otherwise
The last case violates how HTTP should work, but AFAIK it's the only way of letting the browser cache everything I want it to cache. Since the whole communication is encrypted, proxies can't punish me for violating the spec.
The client reacts to Fetch-More by requesting GET /offer/123/items/errata. This way, the resource gets split across two requests. The split is ugly, but an Angular $http interceptor can hide the ugliness from the application.
The second request is cacheable, too, and there can be also a Fetched-At header. The details are unclear, but some strong handwavium makes me believe that it can work. Actually, the errata could itself be inaccurate but still useful and get an errata itself.... etc.
With HTTP/1.1, more requests may mean more latency, but having a couple of them should still be profitable because of the saved bandwidth. The server can decide when to stop.
With HTTP/2, multiple requests could be sent at once. The server could be made to handle them efficiently, as it knows that they belong together. Some more handwavium...
I find the idea strange, but interesting and I'm looking forward to comments. Feel free to downvote me, but please leave an explanation.
Is there a RESTful way to determine whether a POST (or any other non-idempotent verb) will succeed? This would seem to be useful in cases where you essentially need to do multiple idempotent requests against different services, any of which might fail. It would be nice if these requests could be done in a "transaction" (i.e. with support for rollback), but since this is impossible, an alternative is to check whether each of the requests will succeed before actually performing them.
For example suppose I'm building an ecommerce system that allows people to buy t-shirts with custom text printed on them, and this system requires integrating with two different services: a t-shirt printing service, and a payment service. Each of these has a RESTful API, and either might fail. (e.g. the printing company might refuse to print certain words on a t-shirt, say, and the bank might complain if the credit card has expired.) Is there any way to speculatively perform these two requests, so my system will only proceed with them if both requests appear valid?
If not, can this problem be solved in a different way? Creating a resource via a POST with status = pending, and changing this to status = complete if all requests succeed? (DELETE is more tricky...)
HTTP defines the 202 status code for exactly your scenario:
202 Accepted
The request has been accepted for processing, but the processing has not been completed. The request might or might not eventually be acted upon, as it might be disallowed when processing actually takes place. There is no facility for re-sending a status code from an asynchronous operation such as this.
The 202 response is intentionally non-committal. Its purpose is to allow a server to accept a request for some other process (perhaps a batch-oriented process that is only run once per day) without requiring that the user agent's connection to the server persist until the process is completed. The entity returned with this response SHOULD include an indication of the request's current status and either a pointer to a status monitor or some estimate of when the user can expect the request to be fulfilled.
Source: HTTP 1.1 Status Code Definition
This is similar to 201 Created, except that you are indicating that the request has not been completed and the entity has not yet been created. Your response would contain a URL to the resource representing the "order request", so clients can check the status of the order through this URL.
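For illustration, a minimal Spring sketch of that pattern (the /order-requests path, the OrderRequest type, and the store are invented names, not an existing API):

import java.net.URI;

import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RestController;

@RestController
class OrderRequestController {

    private final OrderRequestStore store; // hypothetical persistence

    OrderRequestController(OrderRequestStore store) {
        this.store = store;
    }

    // Accept the order for later processing and answer 202 with a status URL.
    @PostMapping("/order-requests")
    ResponseEntity<OrderRequest> create(@RequestBody OrderRequest request) {
        OrderRequest pending = store.save(request.withStatus("pending"));
        return ResponseEntity
                .accepted()                                          // 202
                .location(URI.create("/order-requests/" + pending.id()))
                .body(pending);
    }

    // Clients poll this URL to see whether the order became "complete" or "rejected".
    @GetMapping("/order-requests/{id}")
    OrderRequest status(@PathVariable long id) {
        return store.find(id);
    }
}

record OrderRequest(long id, String status) {
    OrderRequest withStatus(String newStatus) { return new OrderRequest(id, newStatus); }
}

interface OrderRequestStore {
    OrderRequest save(OrderRequest request);
    OrderRequest find(long id);
}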
To answer your question more directly: There is no way to "test" whether a request will succeed before you make it, because you're asking for clairvoyance.
It's not possible to foresee the range of technical problems that could occur when you attempt to make a request in the future. The network may be unavailable, the server may not be able to access its database or external systems it depends on for functioning, there may be a power-cut and the server is offline, a stray neutrino could wander into your memory and bump a 0 to a 1 causing a catastrophic kernel fault.
In order to consume a remote service you need to account for possible failures of any request in isolation of any other processes.
For your specific problem, if the services have no transactional safety, you can't bake any in there and you have to deal with this in a more real-world way. A few options off the top of my head:
Get the T-Shirt company to give you a "test" mechanism, so you can see whether they'll process any given order without actually placing it. It could be that placing an order with them is a two-phase operation, where you construct the order in the first phase (at which time they validate its creation) and then you subsequently ask the order to be processed (after you have taken payment successfully).
Take the credit-card payment first and move your order into a "paid" state. Then attempt to fulfil the order with the T-Shirt service as an asynchronous process. If fulfilment fails and you can identify that the customer tried to get something printed the company is not prepared to produce, you will have to contact them to change their order or produce a refund.
Most organizations will adopt the second approach, due to its technical simplicity and reduced risk to the business. It also has the benefit of being able to cope with the T-Shirt service not being available; the asynchronous process simply waits until the service is available and completes the order at that time.
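A rough sketch of that second approach, assuming placeholder payment, print, and storage interfaces (none of these are a real vendor API):

import java.util.List;

// Take the payment, mark the order as paid, then try to fulfil it asynchronously.
class OrderProcessor {

    private final PaymentService payments;
    private final PrintService printer;
    private final OrderStore orders;

    OrderProcessor(PaymentService payments, PrintService printer, OrderStore orders) {
        this.payments = payments;
        this.printer = printer;
        this.orders = orders;
    }

    void placeOrder(Order order) {
        payments.charge(order);                  // may throw, so the order never becomes "paid"
        orders.save(order.withStatus("paid"));
    }

    // Run periodically (e.g. from a scheduler): pick up paid orders and try to fulfil them.
    void fulfilPaidOrders() {
        for (Order order : orders.findByStatus("paid")) {
            try {
                printer.print(order);
                orders.save(order.withStatus("complete"));
            } catch (PrintRejectedException e) {
                // The print shop refused the order: refund and flag for follow-up.
                payments.refund(order);
                orders.save(order.withStatus("rejected"));
            } catch (ServiceUnavailableException e) {
                // Leave the order in "paid"; the next run will simply retry.
            }
        }
    }
}

record Order(long id, String status, String text) {
    Order withStatus(String s) { return new Order(id, s, text); }
}

interface PaymentService { void charge(Order o); void refund(Order o); }
interface PrintService { void print(Order o) throws PrintRejectedException, ServiceUnavailableException; }
interface OrderStore { void save(Order o); List<Order> findByStatus(String status); }
class PrintRejectedException extends Exception { }
class ServiceUnavailableException extends Exception { }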
Exactly. That can be done as you suggest in your last sentence. The idea would be to decouple resource creation (which will always succeed, barring network failures), representing an "ongoing request", from the "order acceptance", which can be decided later. As POST returns a "Location" header, you can then retrieve the "status" of your request at any moment.
At some point it may become either accepted or rejected. This may be instantaneous or it may take some time, so you have to design your service with these constraints in mind (i.e., allowing the client to check whether his/her order has been accepted, or running some kind of hourly/daily job that collects accepted requests).
I'm struggling to apply RESTful principles to a new web application I'm working on. In particular, it's the idea that, to be RESTful, each HTTP request should carry enough information by itself for its recipient to process it, in complete harmony with the stateless nature of HTTP.
The application allows users to search for medications. The search accepts filters as input, for example return discontinued medicines, include complementary therapy, etc. In total there are around 30 filters that can be applied.
Additionally, patient details can be entered, including the patient's age, gender, current medications, etc.
To be RESTful, should all this information be included with every request? This seems to place a huge overhead on the network. Also, wouldn't the restrictions on URL length, at least for GET, make this unfeasible?
The "Filter As Resource" is a perfect tact for this.
You can PUT the filter definition to the filter resource, and it can return the filter ID.
PUT is idempotent, so even if the filter is already there, you just need to detect that you've seen the filter before, so you can return the proper ID for the filter.
Then, you can add a filter parameter to your other requests, and they can grab the filter to use for the queries.
GET /medications?filter=1234&page=4&pagesize=20
I would run the raw filters through some sort of canonicalization process, just to have a normalized set, so that, e.g. filter "firstname=Bob lastname=Eubanks" is identical to "lastname=Eubanks firstname=Bob". That's just me though.
The only real concern is that, as time goes on, you may need to obsolete some filters. You can simply error out the request should someone make a request with a missing or obsolete filter.
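To make this concrete, here is a hedged Spring sketch of the two endpoints (FilterStore, MedicationSearch, and the DTOs are made-up names, not an existing API):

import java.util.List;
import java.util.Map;

import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PutMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;

@RestController
class FilterController {

    private final FilterStore filters;      // hypothetical storage
    private final MedicationSearch search;  // hypothetical query service

    FilterController(FilterStore filters, MedicationSearch search) {
        this.filters = filters;
        this.search = search;
    }

    // PUT the filter definition; an equivalent definition always yields the same id.
    @PutMapping("/filters")
    Map<String, Object> putFilter(@RequestBody Map<String, String> definition) {
        long id = filters.findOrCreate(definition);
        return Map.of("id", id);
    }

    // GET /medications?filter=1234&page=4&pagesize=20
    @GetMapping("/medications")
    List<Medication> medications(@RequestParam long filter,
                                 @RequestParam(defaultValue = "0") int page,
                                 @RequestParam(defaultValue = "20") int pagesize) {
        Map<String, String> definition = filters.load(filter);
        return search.run(definition, page, pagesize);
    }
}

interface FilterStore {
    long findOrCreate(Map<String, String> definition);
    Map<String, String> load(long id);
}

interface MedicationSearch {
    List<Medication> run(Map<String, String> definition, int page, int pagesize);
}

record Medication(long id, String name) { }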
Edit answering question...
Let's start with the fundamentals.
Simply, you want to specify a filter for use in queries, but these filters are (potentially) involved and complicated. If it were simply /medications/1234, this wouldn't be a problem.
Effectively, you always need to send the filter to the query. The question is how to represent that filter.
The fundamental issue with things like sessions in REST systems is that they're typically managed "out of band". When you, say, go and create a medication, you PUT or POST to the medications resource, and you get a reference back to that medication.
With a session, you would (typically) get back a cookie, or perhaps some other token to represent that session. If your PUT to the medications resource created a session also, then, in truth, your request created two resources: a medication, and a session.
Unfortunately, when you use something like a cookie, and you require that cookie for your request, the resource name is no longer the true representation of the resource. Now it's the resource name (the URL), and the cookie.
So, if I do a GET on the resource named /medications/search, and the cookie represents a session, and that session happens to have a filter in it, you can see how in effect, that resource name, /medications/search, isn't really useful at all. I don't have all of the information I need to make effective use, because of the side effect of the cookie and the session and the filter therein.
Now, you could perhaps rewrite the name: /medications/search?session=ABC123, effectively embedding the cookie in the resource name.
But now you run into the typical contract of sessions, notably that they're short-lived. So that named resource is less useful long term; not useless, just less useful. Right now, this query gives me interesting data. Tomorrow? Probably not. I'll get some nasty error about the session being gone.
The other problem is that sessions typically are not managed as a resource. For example, they're usually a side effect, vs explicitly managed via GET/PUT/DELETE. Sessions are also the "garbage heap" of web app state. In this case, we're just kind of hoping that the session is properly populated with what is needed for this request. We actually don't really know. Again, it's a side effect.
Now, let's turn it on its head a little bit. Let's use /medications/search?filter=ABC123.
Obviously, casually, this looks identical. We just changed the name from 'session' to 'filter'. But, as discussed, Filters, in this case, ARE a "first class resource". They need to be created, managed, etc. the same as a medication, a JPEG, or any other resource in your system. This is the key distinction.
Certainly, you could treat "sessions" as a first class resource, creating them, putting stuff in them directly, etc. But you can see how, at least from a clarity point of view, a "first class" session isn't really a good abstraction for this case. Using a session, it's like going to the cleaners and handing over your entire purse or briefcase: "Yea, the ticket is in there somewhere, dig out what you want, give me my clothes", especially compared to something explicit like a filter.
So, you can see how, at 30,000 feet, there's not a lot of difference in the case between a filter and a session. But when you zoom in, they're quite different.
With the filter resource, you can choose to make them a persistent thing forever and ever. You can expire them, you can do whatever you want. Sessions tend to have pre-conceived semantics: short-lived, duration of the connection, etc. Filters can have any semantics you want. They're completely separate from what comes with a session.
If I were doing this, how would I work with filters?
I would assume that I really don't care about the content of a filter. Specifically, I doubt I would ever query for "all filters that search by first name". At this juncture, it seems like uninteresting information, so I won't design around it.
Next, I would normalize the filters, like I mentioned above. Make sure that equivalent filters truly are equivalent. You can do this by sorting the expressions, ensuring fieldnames are all uppercase, or whatever.
Then, I would store the filter as an XML or JSON document, whichever is more comfortable/appropriate for the application. I would give each filter a unique key (naturally), but I would also store a hash for the actual document with the filter.
I would do this to be able to quickly find if the filter is already stored. Since I'm normalizing it, I "know" that the XML (say) for logically equivalent filters would be identical. So, when someone goes to PUT, or insert a new filter, I would do a check on the hash to see if it has been stored before. I may well get back more than one (hashes can collide, of course), so I'll need to check the actual XML payloads to see whether they match.
If the filters match, I return a reference to the existing filter. If not, I'd create a new one and return that.
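A small sketch of the normalize-then-hash idea, assuming an invented storage interface (the canonical form here is a sorted, upper-cased key/value string rather than XML or JSON, purely for brevity):

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;
import java.util.stream.Collectors;

class FilterCanonicalizer {

    // Upper-case the field names and sort by them, so that
    // "firstname=Bob lastname=Eubanks" and "lastname=Eubanks firstname=Bob"
    // produce the same canonical document.
    static String canonicalize(Map<String, String> criteria) {
        TreeMap<String, String> normalized = new TreeMap<>();
        criteria.forEach((field, value) -> normalized.put(field.toUpperCase(), value));
        return normalized.entrySet().stream()
                .map(e -> e.getKey() + "=" + e.getValue())
                .collect(Collectors.joining("&"));
    }

    static String hash(String canonical) {
        try {
            MessageDigest sha256 = MessageDigest.getInstance("SHA-256");
            StringBuilder hex = new StringBuilder();
            for (byte b : sha256.digest(canonical.getBytes(StandardCharsets.UTF_8))) {
                hex.append(String.format("%02x", b));
            }
            return hex.toString();
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e);
        }
    }
}

class FilterLookup {

    private final FilterRecordStore store; // hypothetical storage

    FilterLookup(FilterRecordStore store) {
        this.store = store;
    }

    long findOrCreate(Map<String, String> criteria) {
        String canonical = FilterCanonicalizer.canonicalize(criteria);
        String hash = FilterCanonicalizer.hash(canonical);
        // Hashes can collide, so compare the actual canonical payloads as well.
        for (StoredFilter existing : store.findByHash(hash)) {
            if (existing.canonical().equals(canonical)) {
                return existing.id();
            }
        }
        return store.insert(hash, canonical).id();
    }
}

interface FilterRecordStore {
    List<StoredFilter> findByHash(String hash);
    StoredFilter insert(String hash, String canonical);
}

record StoredFilter(long id, String hash, String canonical) { }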
I also would not allow a filter UPDATE/POST. Since I'm handing out references to these filters, I would make them immutable so the references can remain valid. If I wanted a filter by "role", say, the "get all expire medications filter", then I would create a "named filter" resource that associates a name with a filter instance, so that the actual filter data can change but the name remain the same.
Mind, also, that during creation, you're in a race condition (two requests trying to make the same filter), so you would have to account for that. If your system has a high filter volume, this could be a potential bottleneck.
Hope this clarifies the issue for you.
To be RESTful, should all this information be included with every request?
No. If it looks like your server is sending (or receiving) too much information, chances are that there are one or more resources you haven't yet identified.
The first and most important step in designing a RESTful system is to identify and name your resources. How would you do that for your system?
From your description, here's one possible set of resources:
User - a user of the system (maybe a doctor or patient (?) - Role might need to be exposed as a resource here)
Medication - the stuff in the bottle, but it also might represent the kind of bottle (quantity and contents), or it might represent a particular bottle - depending on if you're a pharmacy or just a help desk.
Disease - the condition for which a Patient might want to take a Medication.
Patient - a person who might take a Medication
Recommendation - a Medication that might be beneficial to a Patient based on a Disease they suffer from.
Then you could look for relationships among resources:
User has and belongs to many Roles
Medication has and belongs to many Diseases
Disease has many Recommendations.
Patient has and belongs to many Medications and Diseases (poor chap)
Patient has many Recommendations
Recommendation has one Patient and has one Disease
The specifics are probably not right for your particular problem, but the idea is simple: create a network of relationships among your resources.
At this point it might be helpful to think about URI structure, although keep in mind that REST APIs must be hypertext-driven:
# view all Recommendations for the patient
GET http://server.com/patients/{patient}/recommendations
# view all Recommendations for a Medication
GET http://server.com/medications/{medication}/recommendations
# add a new Recommendation for a Patient
PUT http://server.com/patients/{patient}/recommendations
Because this is REST, you'll spend most of your time defining the media types used to transfer representations of your resources between client and server.
By exposing more resources, you can cut down on the amount of data that needs to be transferred during each request. Also notice there are no query parameters in the URIs. The server can be as stateful as it needs to be to keep track of it all, and each request can be fully self-contained.
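To make the sub-resource idea concrete, here is a rough Spring sketch of the Recommendation URIs listed above (the repository and DTO are invented for illustration):

import java.util.List;

import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.PutMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RestController;

@RestController
class RecommendationController {

    private final RecommendationRepository recommendations; // hypothetical repository

    RecommendationController(RecommendationRepository recommendations) {
        this.recommendations = recommendations;
    }

    // view all Recommendations for the patient
    @GetMapping("/patients/{patient}/recommendations")
    List<Recommendation> forPatient(@PathVariable long patient) {
        return recommendations.findByPatient(patient);
    }

    // add a new Recommendation for a Patient
    @PutMapping("/patients/{patient}/recommendations")
    Recommendation add(@PathVariable long patient, @RequestBody Recommendation recommendation) {
        return recommendations.save(patient, recommendation);
    }
}

interface RecommendationRepository {
    List<Recommendation> findByPatient(long patient);
    Recommendation save(long patient, Recommendation recommendation);
}

record Recommendation(long medication, long disease) { }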
REST is for APIs, not (typical) applications. Don't try to wedge a fundamentally stateful interaction into a stateless model just because you read about it on wikipedia.
To be RESTful, should all this information be included with every request? This seems to place a huge overhead on the network. Also, wouldn't the restrictions on URL length, at least for GET, make this unfeasible?
The size of parameters is usually insignificant compared to the size of resources the server sends. If you're using such large parameters that they are a network burden, place them on the server once and then use them as resources.
There are no significant restrictions on URL length -- if your server has such a limit, upgrade it. It's probably years old and chock-full of security vulnerabilities anyway.
No, all of that does not have to be included in every request.
Each resource (medication, patient history, etc.) should have a canonical URI that uniquely identifies it. In some applications (e.g., Rails-based ones) this will be something like "/patients/1234" or "/drugs/5678", but the URL format is unimportant.
A client that has previously obtained the URI for a resource (such as from a search, or from a link embedded in another resource) can retrieve it using this URI.
Are you working on a RESTful API that other apps will use to search your data? Or are you building an end-user-focused web application where users will log in and perform these searches?
If your users are logging in, then you're already stateful as you'll have some type of session cookie to maintain the logged in state. I would go ahead and create a session object that contains all the search filters. If a user hasn't set any filters, then this object will be empty.
Here's a great blog post about using GET vs POST. It mentions a URL length limit set by Internet Explorer of 2,048 characters, so you want to use POST for long requests.
http://carsonified.com/blog/dev/the-definitive-guide-to-get-vs-post/
From what I can gather, there are three categories:
Never use GET and use POST
Never use POST and use GET
It doesn't matter which one you use.
Am I correct in assuming those three cases? If so, what are some examples from each case?
Use POST for destructive actions such as creation (I'm aware of the irony), editing, and deletion, because you can't hit a POST action in the address bar of your browser. Use GET when it's safe to allow a person to call an action. So a URL like:
http://myblog.org/admin/posts/delete/357
Should bring you to a confirmation page, rather than simply deleting the item. It's far easier to avoid accidents this way.
POST is also more secure than GET, because you aren't sticking information into a URL. And so using GET as the method for an HTML form that collects a password or other sensitive information is not the best idea.
One final note: POST can transmit a larger amount of information than GET. 'POST' has no size restrictions for transmitted data, whilst 'GET' is limited to 2048 characters.
In brief
Use GET for safe and idempotent requests
Use POST for requests that are neither safe nor idempotent
In detail
There is a proper place for each. Even if you don't follow RESTful principles, a lot can be gained from learning about REST and how a resource oriented approach works.
A RESTful application will use GETs for operations which are both safe and idempotent.
A safe operation is an operation which does not change the data requested.
An idempotent operation is one in which the result will be the same no matter how many times you request it.
It stands to reason that, as GETs are used for safe operations they are automatically also idempotent. Typically a GET is used for retrieving a resource (a question and its associated answers on stack overflow for example) or collection of resources.
A RESTful app will use PUTs for operations which are not safe but idempotent.
I know the question was about GET and POST, but I'll return to POST in a second.
Typically a PUT is used for editing a resource (editing a question or an answer on stack overflow for example).
A POST would be used for any operation which is neither safe nor idempotent.
Typically a POST would be used to create a new resource, for example creating a NEW SO question (though in some designs a PUT would be used for this also).
If you run the POST twice you would end up creating TWO new questions.
There's also a DELETE operation, but I'm guessing I can leave that there :)
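As a sketch of how those verbs might map onto a single resource in a Spring controller (the Question resource and its repository are made up for illustration; the comments mark which methods are safe and/or idempotent):

import org.springframework.web.bind.annotation.DeleteMapping;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.PutMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RestController;

@RestController
class QuestionController {

    private final QuestionRepository questions; // hypothetical storage

    QuestionController(QuestionRepository questions) {
        this.questions = questions;
    }

    // GET: safe and idempotent - just retrieves a representation.
    @GetMapping("/questions/{id}")
    Question read(@PathVariable long id) {
        return questions.find(id);
    }

    // PUT: not safe, but idempotent - applying the same edit twice leaves the same result.
    @PutMapping("/questions/{id}")
    Question edit(@PathVariable long id, @RequestBody Question question) {
        return questions.update(id, question);
    }

    // POST: neither safe nor idempotent - running it twice creates two questions.
    @PostMapping("/questions")
    Question create(@RequestBody Question question) {
        return questions.insert(question);
    }

    // DELETE: not safe, but idempotent - the resource is gone either way.
    @DeleteMapping("/questions/{id}")
    void remove(@PathVariable long id) {
        questions.delete(id);
    }
}

interface QuestionRepository {
    Question find(long id);
    Question update(long id, Question question);
    Question insert(Question question);
    void delete(long id);
}

record Question(String title, String body) { }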
Discussion
In practical terms, modern web browsers typically only support GET and POST reliably (you can perform all of these operations via JavaScript calls, but in terms of entering data in forms and pressing submit you've generally got the two options). In a RESTful application the POST will often be overridden to provide the PUT and DELETE calls also.
But, even if you are not following RESTful principles, it can be useful to think in terms of using GET for retrieving / viewing information and POST for creating / editing information.
You should never use GET for an operation which alters data. If a search engine crawls a link to your evil op, or the client bookmarks it, that could spell big trouble.
Use GET if you don't mind the request being repeated (That is it doesn't change state).
Use POST if the operation does change the system's state.
Short Version
GET: Usually used for submitting search requests, or any request where you want the user to be able to pull up the exact page again.
Advantages of GET:
URLs can be bookmarked safely.
Pages can be reloaded safely.
Disadvantages of GET:
Variables are passed through the URL as name-value pairs. (Security risk)
Limited number of variables that can be passed. (Based upon browser. For example, Internet Explorer is limited to 2,048 characters.)
POST: Used for higher security requests where data may be used to alter a database, or a page that you don't want someone to bookmark.
Advantages of POST:
Name-value pairs are not displayed in url. (Security += 1)
An unlimited number of name-value pairs can be passed via POST.
Disadvantages of POST:
Pages that use POST data cannot be bookmarked (if you so desired).
Longer Version
Directly from the Hypertext Transfer Protocol -- HTTP/1.1:
9.3 GET
The GET method means retrieve whatever information (in the form of an entity) is identified by the Request-URI. If the Request-URI refers to a data-producing process, it is the produced data which shall be returned as the entity in the response and not the source text of the process, unless that text happens to be the output of the process.
The semantics of the GET method change to a "conditional GET" if the request message includes an If-Modified-Since, If-Unmodified-Since, If-Match, If-None-Match, or If-Range header field. A conditional GET method requests that the entity be transferred only under the circumstances described by the conditional header field(s). The conditional GET method is intended to reduce unnecessary network usage by allowing cached entities to be refreshed without requiring multiple requests or transferring data already held by the client.
The semantics of the GET method change to a "partial GET" if the request message includes a Range header field. A partial GET requests that only part of the entity be transferred, as described in section 14.35. The partial GET method is intended to reduce unnecessary network usage by allowing partially-retrieved entities to be completed without transferring data already held by the client.
The response to a GET request is cacheable if and only if it meets the requirements for HTTP caching described in section 13.
See section 15.1.3 for security considerations when used for forms.
9.5 POST
The POST method is used to request that the origin server accept the entity enclosed in the request as a new subordinate of the resource identified by the Request-URI in the Request-Line. POST is designed to allow a uniform method to cover the following functions:
Annotation of existing resources;
Posting a message to a bulletin board, newsgroup, mailing list, or similar group of articles;
Providing a block of data, such as the result of submitting a form, to a data-handling process;
Extending a database through an append operation.
The actual function performed by the POST method is determined by the server and is usually dependent on the Request-URI. The posted entity is subordinate to that URI in the same way that a file is subordinate to a directory containing it, a news article is subordinate to a newsgroup to which it is posted, or a record is subordinate to a database.
The action performed by the POST method might not result in a resource that can be identified by a URI. In this case, either 200 (OK) or 204 (No Content) is the appropriate response status, depending on whether or not the response includes an entity that describes the result.
The first important thing is the meaning of GET versus POST:
GET should be used to... get... some information from the server,
while POST should be used to send some information to the server.
After that, a couple of things can be noted:
Using GET, your users can use the "back" button in their browser, and they can bookmark pages
There is a limit on the size of the parameters you can pass via GET (2 KB for some versions of Internet Explorer, if I'm not mistaken); the limit is much higher for POST, and generally depends on the server's configuration.
Anyway, I don't think we could "live" without GET : think of how many URLs you are using with parameters in the query string, every day -- without GET, all those wouldn't work ;-)
Apart from the length constraints difference in many web browsers, there is also a semantic difference. GETs are supposed to be "safe" in that they are read-only operations that don't change the server state. POSTs will typically change state and will give warnings on resubmission. Search engines' web crawlers may make GETs but should never make POSTs.
Use GET if you want to read data without changing state, and use POST if you want to update state on the server.
My general rule of thumb is to use GET when you are making requests to the server that aren't going to alter state. POSTs are reserved for requests to the server that alter state.
One practical difference is that browsers and webservers have a limit on the number of characters that can exist in a URL. It's different from application to application, but it's certainly possible to hit it if you've got textareas in your forms.
Another gotcha with GETs - they get indexed by search engines and other automatic systems. Google once had a product that would pre-fetch links on the page you were viewing, so they'd be faster to load if you clicked those links. It caused major havoc on sites that had links like delete.php?id=1 - people lost their entire sites.
Use GET when you want the URL to reflect the state of the page. This is useful for viewing dynamically generated pages, such as those seen here. A POST should be used in a form to submit data, like when I click the "Post Your Answer" button. It also produces a cleaner URL since it doesn't generate a parameter string after the path.
Because GETs are purely URLs, they can be cached by the web browser and may be better used for things like consistently generated images. (Set an Expiry time)
One example from the gravatar page: http://www.gravatar.com/avatar/4c3be63a4c2f539b013787725dfce802?d=monsterid
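For example, a hedged Spring sketch of serving such a consistently generated image with an expiry (the /avatar path and the renderer interface are invented, not the gravatar API):

import java.util.concurrent.TimeUnit;

import org.springframework.http.CacheControl;
import org.springframework.http.MediaType;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RestController;

@RestController
class AvatarController {

    private final AvatarRenderer renderer; // hypothetical image generator

    AvatarController(AvatarRenderer renderer) {
        this.renderer = renderer;
    }

    // Because the URL fully identifies the image, the browser (and any proxy)
    // may cache the response until it expires.
    @GetMapping("/avatar/{hash}")
    ResponseEntity<byte[]> avatar(@PathVariable String hash) {
        return ResponseEntity.ok()
                .contentType(MediaType.IMAGE_PNG)
                .cacheControl(CacheControl.maxAge(1, TimeUnit.DAYS))
                .body(renderer.render(hash));
    }
}

interface AvatarRenderer {
    byte[] render(String hash);
}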
GET may yield marginally better performance; some web servers write POST contents to a temporary file before invoking the handler.
Another thing to consider is the size limit. GETs are capped by the size of the URL, 1024 bytes by the standard, though browsers may support more.
Transferring more data than that should use a POST to get better browser compatibility.
Even below that limit it can be a problem: as another poster wrote, anything in the URL could end up in other parts of the browser's UI, like the history.
1.3 Quick Checklist for Choosing HTTP GET or POST
Use GET if:
The interaction is more like a question (i.e., it is a safe operation such as a query, read operation, or lookup).
Use POST if:
The interaction is more like an order, or
The interaction changes the state of the resource in a way that the user would perceive (e.g., a subscription to a service), or
The user be held accountable for the results of the interaction.
Source.
There is nothing you can't do per se. The point is that you're not supposed to modify the server state on an HTTP GET. HTTP proxies assume that since HTTP GET does not modify the state, whether a user invokes HTTP GET one time or 1000 times makes no difference. Using this information they assume it is safe to return a cached version of the first HTTP GET. If you break the HTTP specification you risk breaking HTTP clients and proxies in the wild. Don't do it :)
This traverses into the concept of REST and how the web was kinda intended to be used. There is an excellent podcast on Software Engineering Radio that gives an in-depth talk about the use of GET and POST.
GET is used to pull data from the server, where an update action shouldn't be needed. The idea is that you should be able to use the same GET request over and over and have the same information returned. The URL carries the GET information in the query string, because it was meant to be easily sent to other systems and people, like an address for where to find something.
POST is supposed to be used (at least by the REST architecture, which the web is kinda based on) for pushing information to the server / telling the server to perform an action. Examples: update this data, create this record.
I don't see a problem using GET though; I use it for simple things where it makes sense to keep things on the query string.
Using it to update state - like a GET of delete.php?id=5 to delete a page - is very risky. People found that out when Google's web accelerator started prefetching URLs on pages - it hit all the 'delete' links and wiped out people's data. The same thing can happen with search-engine spiders.
POST can move large amounts of data while GET cannot.
But generally it's not about a shortcoming of GET; rather, it's a convention to follow if you want your website/webapp to behave nicely.
Have a look at http://www.w3.org/2001/tag/doc/whenToUseGet.html
From RFC 2616:
9.3 GET
The GET method means retrieve whatever information (in the form of an entity) is identified by the Request-URI. If the Request-URI refers to a data-producing process, it is the produced data which shall be returned as the entity in the response and not the source text of the process, unless that text happens to be the output of the process.
9.5 POST
The POST method is used to request that the origin server accept the entity enclosed in the request as a new subordinate of the resource identified by the Request-URI in the Request-Line. POST is designed to allow a uniform method to cover the following functions:
Annotation of existing resources;
Posting a message to a bulletin board, newsgroup, mailing list, or similar group of articles;
Providing a block of data, such as the result of submitting a form, to a data-handling process;
Extending a database through an append operation.
The actual function performed by the POST method is determined by the server and is usually dependent on the Request-URI. The posted entity is subordinate to that URI in the same way that a file is subordinate to a directory containing it, a news article is subordinate to a newsgroup to which it is posted, or a record is subordinate to a database.
The action performed by the POST method might not result in a resource that can be identified by a URI. In this case, either 200 (OK) or 204 (No Content) is the appropriate response status, depending on whether or not the response includes an entity that describes the result.
I use POST when I don't want people to see the QueryString or when the QueryString gets large. Also, POST is needed for file uploads.
I don't see a problem using GET though, I use it for simple things where it makes sense to keep things on the QueryString.
Using GET also makes linking to a particular page possible, where POST would not work.
The original intent was that GET was used for getting data back and POST was to be anything. The rule of thumb that I use is that if I'm sending anything back to the server, I use POST. If I'm just calling a URL to get back data, I use GET.
Read the article about HTTP on Wikipedia. It will explain what the protocol is and what it does:
GET
Requests a representation of the specified resource. Note that GET should not be used for operations that cause side-effects, such as using it for taking actions in web applications. One reason for this is that GET may be used arbitrarily by robots or crawlers, which should not need to consider the side effects that a request should cause.
and
POST
Submits data to be processed (e.g., from an HTML form) to the identified resource. The data is included in the body of the request. This may result in the creation of a new resource or the updates of existing resources or both.
The W3C has a document named URIs, Addressability, and the use of HTTP GET and POST that explains when to use what. Citing
1.3 Quick Checklist for Choosing HTTP GET or POST
Use GET if:
The interaction is more like a question (i.e., it is a safe operation such as a query, read operation, or lookup).
and
Use POST if:
The interaction is more like an order, or
The interaction changes the state of the resource in a way that the user would perceive (e.g., a subscription to a service), or
The user be held accountable for the results of the interaction.
However, before the final decision to use HTTP GET or POST, please also consider considerations for sensitive data and practical considerations.
A practical example would be whenever you submit an HTML form. You specify either post or get for the form action. PHP will populate $_GET and $_POST accordingly.
In PHP, the POST data limit is usually set by your php.ini. GET is limited by server/browser settings, I believe - usually around 255 bytes.
From w3schools.com:
What is HTTP?
The Hypertext Transfer Protocol (HTTP) is designed to enable communications between clients and servers.
HTTP works as a request-response protocol between a client and server.
A web browser may be the client, and an application on a computer that hosts a web site may be the server.
Example: A client (browser) submits an HTTP request to the server; then the server returns a response to the client. The response contains status information about the request and may also contain the requested content.
Two HTTP Request Methods: GET and POST
Two commonly used methods for a request-response between a client and server are GET and POST.
GET – Requests data from a specified resource
POST – Submits data to be processed to a specified resource
Well, one major thing is that anything you submit over GET is going to be exposed via the URL. Secondly, as Ceejayoz says, there is a limit on the number of characters in a URL.
Another difference is that POST generally requires two HTTP operations, whereas GET only requires one.
Edit: I should clarify--for common programming patterns. Generally responding to a POST with a straight up HTML web page is a questionable design for a variety of reasons, one of which is the annoying "you must resubmit this form, do you wish to do so?" on pressing the back button.
As answered by others, there's a limit on URL size with GET, and files can be submitted only with POST.
I'd like to add that one can add things to a database with a GET and perform actions with a POST. When a script receives a POST or a GET, it can do whatever the author wants it to do. I believe the lack of understanding comes from the wording the book chose or how you read it.
A script author should use POST to change the database and GET only for retrieval of information.
Scripting languages provide many means of accessing the request. For example, PHP allows the use of $_REQUEST to retrieve either a POST or a GET. One should avoid this in favor of the more specific $_GET or $_POST.
In web programming, there's a lot more room for interpretation. There's what one should do and what one can do, but which one is better is often up for debate. Luckily, in this case, there is no ambiguity. You should use POST to change data, and you should use GET to retrieve information.
HTTP POST data doesn't have a specified limit on the amount of data, whereas different browsers have different limits for GETs. RFC 2068 states:
Servers should be cautious about depending on URI lengths above 255 bytes, because some older client or proxy implementations may not properly support these lengths
Specifically, you should use the right HTTP constructs for what they're meant for. HTTP GETs shouldn't have side effects and can be safely refreshed and stored by HTTP proxies, etc.
HTTP POSTs are used when you want to submit data to a URL resource.
A typical example of using HTTP GET is a search, e.g. Search?Query=my+query
A typical example of using HTTP POST is submitting feedback via an online form.
Gorgapor, mod_rewrite still often utilizes GET. It just allows a friendlier URL to be translated into a URL with a GET query string.
Simple version of POST, GET, PUT, DELETE:
use GET - when you want to fetch a resource, such as a list of data based on an ID or name
use POST - when you want to send data to the server. Keep in mind that POST is a heavier-weight operation: internally it will create a new resource, so for updates you should use PUT instead of POST
use PUT - when you