Since the Marvel API requires a hash and a timestamp as query parameters, the URL changes on every request, like this: https://gateway.marvel.com/v1/public/comics?apikey=xxxxx&hash=xxxxx&ts=xxxx
OkHttp's cache keys responses on the URL, so if the URL is different every time, the ETag is never useful.
Is there a solution to this?
Your best option is to bring it up with the API’s designers and explain why it’s cache-hostile. Browsers won’t cache these successfully, for example.
Once you’ve done that, you can use a network interceptor to add the cache-breaking query parameters to your outbound requests. That way requests that don’t need the network have cacheable URLs.
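For concreteness, here's a minimal sketch of what such a network interceptor could look like with OkHttp 3 in Java, assuming the Marvel-documented hash of md5(ts + privateKey + publicKey); the class name and key handling are placeholders, not the library's official recipe:
import java.io.IOException;
import java.math.BigInteger;
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import okhttp3.HttpUrl;
import okhttp3.Interceptor;
import okhttp3.Request;
import okhttp3.Response;

// Hypothetical interceptor: appends the auth parameters at the network
// layer, so the rest of the stack only ever sees the stable URL.
class MarvelAuthInterceptor implements Interceptor {
  private final String publicKey;
  private final String privateKey;

  MarvelAuthInterceptor(String publicKey, String privateKey) {
    this.publicKey = publicKey;
    this.privateKey = privateKey;
  }

  @Override
  public Response intercept(Interceptor.Chain chain) throws IOException {
    Request original = chain.request();
    String ts = String.valueOf(System.currentTimeMillis());
    // Marvel documents the hash as md5(ts + privateKey + publicKey).
    String hash = md5(ts + privateKey + publicKey);
    HttpUrl authedUrl = original.url().newBuilder()
        .addQueryParameter("apikey", publicKey)
        .addQueryParameter("ts", ts)
        .addQueryParameter("hash", hash)
        .build();
    return chain.proceed(original.newBuilder().url(authedUrl).build());
  }

  private static String md5(String input) {
    try {
      byte[] digest = MessageDigest.getInstance("MD5")
          .digest(input.getBytes(StandardCharsets.UTF_8));
      return String.format("%032x", new BigInteger(1, digest));
    } catch (NoSuchAlgorithmException e) {
      throw new AssertionError(e); // MD5 is always available on the JVM
    }
  }
}

// Registered as a *network* interceptor, so the rewrite happens after the
// cache step, which is the point of the suggestion above:
// OkHttpClient client = new OkHttpClient.Builder()
//     .cache(cache)
//     .addNetworkInterceptor(new MarvelAuthInterceptor(publicKey, privateKey))
//     .build();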
Cross-origin resource sharing is a mechanism that allows a web page to make XMLHttpRequests to another domain (from Wikipedia).
I've been fiddling with CORS for the last couple of days and I think I have a pretty good understanding of how everything works.
So my question is not about how CORS / preflight work; it's about the reason for introducing preflights as a new request type. I fail to see why the browser, on behalf of a page from A, needs to send a preflight (PR) to server B just to find out whether the real request (RR) will be accepted or not; it would certainly be possible for B to accept/reject RR without any prior PR.
After searching quite a bit I found this piece of information at www.w3.org (7.1.5):
To protect resources against cross-origin requests that could not originate from certain user agents before this specification existed a preflight request is made to ensure that the resource is aware of this specification.
I find this the hardest-to-understand sentence ever. My interpretation (I should probably call it my 'best guess') is that it's about protecting server B against requests from server C that is not aware of the spec.
Can someone please explain a scenario / show a problem that PR + RR solves better than RR alone?
I spent some time being confused as to the purpose of the preflight request but I think I've got it now.
The key insight is that preflight requests are not a security thing. Rather, they're a not-changing-the-rules thing.
Preflight requests have nothing to do with security, and they have no bearing on applications that are being developed now, with an awareness of CORS. Rather, the preflight mechanism benefits servers that were developed without an awareness of CORS, and it functions as a sanity check between the client and the server that they are both CORS-aware. The developers of CORS felt that there were enough servers out there that were relying on the assumption that they would never receive, e.g. a cross-domain DELETE request that they invented the preflight mechanism to allow both sides to opt-in. They felt that the alternative, which would have been to simply enable the cross-domain calls, would have broken too many existing applications.
There are three scenarios here:
Old servers, no longer under development, and developed before CORS. These servers may make assumptions that they'll never receive e.g. a cross-domain DELETE request. This scenario is the primary beneficiary of the preflight mechanism. Yes these services could already be abused by a malicious or non-conforming user agent (and CORS does nothing to change this), but in a world with CORS the preflight mechanism provides an extra 'sanity check' so that clients and servers don't break because the underlying rules of the web have changed.
Servers that are still under development, but which contain a lot of old code and for which it's not feasible/desirable to audit all the old code to make sure it works properly in a cross-domain world. This scenario allows servers to progressively opt-in to CORS, e.g. by saying "Now I'll allow this particular header", "Now I'll allow this particular HTTP verb", "Now I'll allow cookies/auth information to be sent", etc. This scenario benefits from the preflight mechanism.
New servers that are written with an awareness of CORS. According to standard security practices, the server has to protect its resources in the face of any incoming request -- servers can't trust clients to not do malicious things. This scenario doesn't benefit from the preflight mechanism: the preflight mechanism brings no additional security to a server that has properly protected its resources.
What was the motivation behind introducing preflight requests?
Preflight requests were introduced so that a browser could be sure it was dealing with a CORS-aware server before sending certain requests. Those requests were defined to be those that were both potentially dangerous (state-changing) and new (not possible before CORS due to the Same Origin Policy). Using preflight requests means that servers must opt-in (by responding properly to the preflight) to the new, potentially dangerous types of request that CORS makes possible.
That's the meaning of this part of the original specification: "To protect resources against cross-origin requests that could not originate from certain user agents before this specification existed a preflight request is made to ensure that the resource is aware of this specification."
Can you give me an example?
Let's imagine that a browser user is logged into their banking site at A.com. When they navigate to the malicious B.com, that page includes some Javascript that tries to send a DELETE request to A.com/account. Since the user is logged into A.com, that request, if sent, would include cookies that identify the user.
Before CORS, the browser's Same Origin Policy would have blocked it from sending this request. But since the purpose of CORS is to make just this kind of cross-origin communication possible, that's no longer appropriate.
The browser could simply send the DELETE and let the server decide how to handle it. But what if A.com isn't aware of the CORS protocol? It might go ahead and execute the dangerous DELETE. It might have assumed that—due to the browser's Same Origin Policy—it could never receive such a request, and thus it might have never been hardened against such an attack.
To protect such non-CORS-aware servers, then, the protocol requires the browser to first send a preflight request. This new kind of request is something that only CORS-aware servers can respond to properly, allowing the browser to know whether or not it's safe to send the actual DELETE.
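To make that concrete, here is a minimal, hypothetical sketch (using the javax.servlet API; the trusted origin and class name are made up) of how a CORS-aware A.com could answer the preflight and thereby opt in to cross-origin DELETE:
import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Hypothetical filter on A.com: only a CORS-aware server answers the
// preflight with these headers, which is exactly the opt-in signal the
// browser waits for before it sends the real DELETE.
public class CorsPreflightFilter implements Filter {
  private static final String TRUSTED_ORIGIN = "https://partner.example.com";

  @Override public void init(FilterConfig config) {}
  @Override public void destroy() {}

  @Override
  public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
      throws IOException, ServletException {
    HttpServletRequest request = (HttpServletRequest) req;
    HttpServletResponse response = (HttpServletResponse) res;
    String origin = request.getHeader("Origin");

    boolean isPreflight = "OPTIONS".equals(request.getMethod())
        && request.getHeader("Access-Control-Request-Method") != null;
    if (isPreflight) {
      if (TRUSTED_ORIGIN.equals(origin)) {
        response.setHeader("Access-Control-Allow-Origin", origin);
        response.setHeader("Access-Control-Allow-Methods", "GET, POST, DELETE");
        response.setHeader("Access-Control-Allow-Headers", "Content-Type");
        response.setHeader("Access-Control-Max-Age", "600"); // cache the preflight
      }
      // For untrusted origins we simply omit the headers; the browser
      // will then refuse to send the actual request.
      response.setStatus(HttpServletResponse.SC_NO_CONTENT);
      return;
    }

    if (TRUSTED_ORIGIN.equals(origin)) {
      // The actual request also needs this header for the caller to read it.
      response.setHeader("Access-Control-Allow-Origin", origin);
    }
    chain.doFilter(req, res);
  }
}
With something like this in place, the malicious B.com scenario above still fails: B.com's preflight gets no Access-Control headers back, so the browser never sends the DELETE.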
Why all this fuss about the browser, can't the attacker just send a DELETE request from their own computer?
Sure, but such a request won't include the user's cookies. The attack that this is designed to prevent relies on the fact that the browser will send cookies (in particular, authentication information for the user) for the other domain along with the request.
That sounds like Cross-Site Request Forgery, where a form on site B.com can be submitted to A.com with the user's cookies and do damage.
That's right. Another way of putting this is that preflight requests were created so as to not increase the CSRF attack surface for non-CORS-aware servers.
But POST is listed as a method that doesn't require preflights. That can change state and delete data just like a DELETE!
That's true! CORS does not protect your site from CSRF attacks. Then again, without CORS you are also not protected from CSRF attacks. The purpose of preflight requests is just to limit your CSRF exposure to what already existed in the pre-CORS world.
Sigh. OK, I grudgingly accept the need for preflight requests. But why do we have to do it for every resource (URL) on the server? The server either handles CORS or it doesn't.
Are you sure about that? It's not uncommon for multiple servers to handle requests for a single domain. For example, it may be the case that requests to A.com/url1 are handled by one kind of server and requests to A.com/url2 are handled by a different kind of server. It's not generally the case that the server handling a single resource can make security guarantees about all resources on that domain.
Fine. Let's compromise. Let's create a new CORS header that allows the server to state exactly which resources it can speak for, so that additional preflight requests to those URLs can be avoided.
Good idea! In fact, the header Access-Control-Policy-Path was proposed for just this purpose. Ultimately, though, it was left out of the specification, apparently because some servers incorrectly implemented the URI specification in such a way that requests to paths that seemed safe to the browser would not in fact be safe on the broken servers.
Was this a prudent decision that prioritized security over performance, allowing browsers to immediately implement the CORS specification without putting existing servers at risk? Or was it shortsighted to doom the internet to wasted bandwidth and doubled latency just to accommodate bugs in a particular server at a particular time?
Opinions differ.
Well, at the very least browsers will cache the preflight for a single URL?
Yes. Though probably not for very long. In WebKit browsers the maximum preflight cache time is currently 10 minutes.
Sigh. Well, if I know that my servers are CORS-aware, and therefore don't need the protection offered by preflight requests, is there any way for me to avoid them?
Your only real option is to make sure that your requests use CORS-safe methods and headers. That might mean leaving out custom headers that you would otherwise include (like X-Requested-With), changing the Content-Type, or more.
Whatever you do, you must make sure that you have proper CSRF protections in place, since CORS will not block all unsafe requests. As the original specification puts it: "resources for which simple requests have significance other than retrieval must protect themselves from Cross-Site Request Forgery".
Consider the world of cross-domain requests before CORS. You could do a standard form POST, or use a script or an image tag to issue a GET request. You couldn't make any request type other than GET/POST, and you couldn't issue any custom headers on these requests.
With the advent of CORS, the spec authors were faced with the challenge of introducing a new cross-domain mechanism without breaking the existing semantics of the web. They chose to do this by giving servers a way to opt-in to any new request type. This opt-in is the preflight request.
So GET/POST requests without any custom headers don't need a preflight, since these requests were already possible before CORS. But any request with custom headers, or PUT/DELETE requests, do need a preflight, since these are new to the CORS spec. If the server knows nothing about CORS, it will reply without any CORS-specific headers, and the actual request will not be made.
Without the preflight request, servers could begin seeing unexpected requests from browsers. This could lead to a security issue if the servers weren't prepared for these types of requests. The CORS preflight allows cross-domain requests to be introduced to the web in a safe manner.
CORS allows you to specify more headers and method types than were previously possible with a cross-origin <img src> or <form action>.
Some servers could have been (poorly) protected with the assumption that a browser cannot make, e.g., a cross-origin DELETE request or a cross-origin request with an X-Requested-With header, so such requests are "trusted".
To make sure that the server really does support CORS, and doesn't just happen to respond to random requests, the preflight is executed.
I feel that the other answers aren't focusing on the reason the preflight enhances security.
Scenarios:
1) With preflight: an attacker forges a request from the site dummy-forums.com while the user is authenticated to safe-bank.com.
If the server does not check the origin, and somehow has a flaw, the browser will first issue a preflight request (OPTIONS method). The server knows nothing about the CORS response the browser is expecting, so the browser will not proceed (no harm whatsoever).
2) Without preflight: the attacker forges the request under the same scenario as above; the browser issues the POST or PUT request right away, and the server accepts it and might process it, potentially causing harm.
If the attacker sends a request directly, cross-origin, from some random host, they are most likely making a request with no authentication. That's a forged request, but not a CSRF one, so the server will check credentials and reject it.
CORS doesn't attempt to prevent an attacker who has the credentials from issuing requests, although an origin whitelist could help reduce this attack vector.
The pre-flight mechanism adds safety and consistency between clients and servers.
I don't know if this is worth the extra handshake for every request, since caching is hardly usable there, but that's how it works.
Here's another way of looking at it, using code:
<!-- hypothetical exploit on evil.com -->
<!-- Targeting banking-website.example.com, which authenticates with a cookie -->
<script>
  jQuery.ajax({
    method: "POST",
    url: "https://banking-website.example.com",
    data: JSON.stringify({
      sendMoneyTo: "Dr Evil",
      amount: 1000000
    }),
    contentType: "application/json",
    dataType: "json"
  });
</script>
Pre-CORS, the exploit attempt above would fail because it violates the same-origin policy. An API designed this way did not need XSRF protection, because it was protected by the browser's native security model. It was impossible for a pre-CORS browser to generate a cross-origin JSON POST.
Now CORS comes on the scene – if opting in to CORS via preflight were not required, this site would suddenly have a huge vulnerability, through no fault of its own.
As for why some requests are allowed to skip the preflight, that is answered by the spec:
A simple cross-origin request has been defined as congruent with those which may be generated by currently deployed user agents that do not conform to this specification.
To untangle that, GET is not pre-flighted because it is a "simple method" as defined by 7.1.5. (The headers must also be "simple" in order to avoid the pre-flight).
The justification for this is that a "simple" cross-origin GET request could already be performed by e.g. <script src=""> (this is how JSONP works). Since any element with a src attribute can trigger a cross-origin GET with no preflight, there would be no security benefit to requiring a preflight on "simple" XHRs.
Additionally, for HTTP request methods that can cause side-effects on user data (in particular, for HTTP methods other than GET, or for POST usage with certain MIME types), the specification mandates that browsers "preflight" the request
Source
In a browser supporting CORS, read requests (like GET) are already protected by the same-origin policy: a malicious website trying to make an authenticated cross-domain request (for example to the victim's internet banking website or router's configuration interface) will not be able to read the returned data, because the bank or the router doesn't set the Access-Control-Allow-Origin header.
However, with write requests (like POST) the damage is done when the request arrives at the webserver.* A webserver could check the Origin header to determine whether the request is legitimate, but this check is often not implemented, because either the webserver has no need for CORS or it is older than CORS and therefore assumes that cross-domain POSTs are completely forbidden by the same-origin policy.
That is why webservers are given the chance to opt-in into receiving cross-domain write requests.
* Essentially the AJAX version of CSRF.
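Here's a minimal, hypothetical sketch (javax.servlet API; the trusted origin and class name are made up) of the Origin check described above, rejecting cross-origin write requests from untrusted sites:
import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Hypothetical filter: refuse state-changing requests whose Origin header
// is present but not on the whitelist, before any damage is done.
// This complements, rather than replaces, classic CSRF tokens.
public class OriginCheckFilter implements Filter {
  @Override public void init(FilterConfig config) {}
  @Override public void destroy() {}

  @Override
  public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
      throws IOException, ServletException {
    HttpServletRequest request = (HttpServletRequest) req;
    String method = request.getMethod();
    String origin = request.getHeader("Origin");

    boolean isWrite = !("GET".equals(method) || "HEAD".equals(method)
        || "OPTIONS".equals(method));
    boolean untrustedOrigin = origin != null
        && !"https://www.example.com".equals(origin);

    if (isWrite && untrustedOrigin) {
      ((HttpServletResponse) res).sendError(HttpServletResponse.SC_FORBIDDEN);
      return;
    }
    chain.doFilter(req, res);
  }
}
Requests without an Origin header pass through untouched, so this only affects cross-origin browser traffic.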
Aren't preflighted requests about performance? With a preflighted request, a client can quickly learn whether the operation is allowed before sending a large amount of data (e.g., JSON with a PUT), or before sensitive data in authentication headers travels over the wire.
The fact that PUT, DELETE, and other methods, as well as custom headers, aren't allowed by default (they need explicit permission via "Access-Control-Allow-Methods" and "Access-Control-Allow-Headers") sounds like a double check, because these operations could have more implications for the user's data than GET requests.
So, it sounds like:
"I saw that you allow cross-site requests from http://foo.example, BUT are you SURE that you'll allow DELETE requests? Did you consider the impacts that these requests might cause in the user data?"
I didn't understand the cited correlation between preflighted requests and the benefit to old servers. A web service that was implemented before CORS, or without any CORS awareness, will never receive ANY cross-site request, because, to begin with, its responses won't have the "Access-Control-Allow-Origin" header.
My understanding is that a browser sends a conditional GET if it is not sure whether the component it has is up to date. The question is what defines "not sure". I presume it varies by browser and maybe other conditions. I also presume it's not something you can control, i.e. I can't do anything to make the browser change its "not sure" criteria. I can't set something the way I can set an Expires header to whatever I want on an HTTP server. Is this correct?
Note: if you can answer this question with just a really good link, that's fine. I couldn't find one.
HTTP has an expiration model. It defines how servers can specify that their responses expire, and how caches can determine the age and freshness of a response. In addition, there are further Cache-Control directives that can modify how responses are handled, dependent on or independent of their freshness.
To conclude upfront: HTTP caching is quite complex, and the actual behavior depends on multiple factors.
The cache-control directives can be broken down into these general categories:
Restrictions on what are cacheable; these may only be imposed by the origin server.
Restrictions on what may be stored by a cache; these may be imposed by either the origin server or the user agent.
Modifications of the basic expiration mechanism; these may be imposed by either the origin server or the user agent.
Controls over cache revalidation and reload; these may only be imposed by a user agent.
Control over transformation of entities.
But in the end, it all depends on the user agent's obedience to these rules.
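To make the expiration model and the question's conditional GET concrete, here is a minimal, hypothetical servlet sketch (javax.servlet API; the timestamp and names are made up): the origin server declares freshness with max-age, and once the response goes stale, the browser's conditional GET can be answered with 304.
import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Hypothetical servlet: declares freshness with Cache-Control: max-age and
// answers conditional GETs (If-Modified-Since) with 304 when nothing changed.
public class ReportServlet extends HttpServlet {
  // Pretend the resource was last modified at a fixed instant.
  private static final long LAST_MODIFIED = 1_700_000_000_000L;

  @Override
  protected void doGet(HttpServletRequest req, HttpServletResponse resp)
      throws ServletException, IOException {
    // Fresh for 10 minutes: within that window a cache may reuse the
    // response without contacting this server at all.
    resp.setHeader("Cache-Control", "max-age=600");
    resp.setDateHeader("Last-Modified", LAST_MODIFIED);

    // After the response goes stale, the browser sends a conditional GET
    // with If-Modified-Since; we can answer 304 and skip the body.
    long ifModifiedSince = req.getDateHeader("If-Modified-Since");
    if (ifModifiedSince >= LAST_MODIFIED) {
      resp.setStatus(HttpServletResponse.SC_NOT_MODIFIED);
      return;
    }

    resp.setContentType("text/plain");
    resp.getWriter().println("report body");
  }
}
So whether the browser feels "not sure" is largely driven by the freshness information the server itself supplies (Expires, Cache-Control: max-age, Last-Modified/ETag), with browser heuristics filling in when none is given.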
Cache-Control: no-cache
Cache-Control: max-age=0
Cache-Control: no-store
Do they differ from browser to browser? What should then be considered the standard?
no-cache
If the no-cache directive does not specify a field-name, then a cache MUST NOT use the response to satisfy a subsequent request without successful revalidation with the origin server. This allows an origin server to prevent caching even by caches that have been configured to return stale responses to client requests.
If the no-cache directive does specify one or more field-names, then a cache MAY use the response to satisfy a subsequent request, subject to any other restrictions on caching. However, the specified field-name(s) MUST NOT be sent in the response to a subsequent request without successful revalidation with the origin server. This allows an origin server to prevent the re-use of certain header fields in a response, while still allowing caching of the rest of the response.
max-age
Indicates that the client is willing to accept a response whose age is no greater than the specified time in seconds. Unless the max-stale directive is also included, the client is not willing to accept a stale response.
no-store
The purpose of the no-store directive is to prevent the inadvertent release or retention of sensitive information (for example, on backup tapes). The no-store directive applies to the entire message, and MAY be sent either in a response or in a request. If sent in a request, a cache MUST NOT store any part of either this request or any response to it. If sent in a response, a cache MUST NOT store any part of either this response or the request that elicited it. This directive applies to both non-shared and shared caches. "MUST NOT store" in this context means that the cache MUST NOT intentionally store the information in non-volatile storage, and MUST make a best-effort attempt to remove the information from volatile storage as promptly as possible after forwarding it.
Even when this directive is associated with a response, users might explicitly store such a response outside of the caching system (e.g., with a "Save As" dialog). History buffers MAY store such responses as part of their normal operation.
The purpose of this directive is to meet the stated requirements of certain users and service authors who are concerned about accidental releases of information via unanticipated accesses to cache data structures. While the use of this directive might improve privacy in some cases, we caution that it is NOT in any way a reliable or sufficient mechanism for ensuring privacy. In particular, malicious or compromised caches might not recognize or obey this directive, and communications networks might be vulnerable to eavesdropping.
More information: http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.9.2
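To see the practical difference, here is a hypothetical servlet sketch (javax.servlet API; the mode parameter and class name are made up) contrasting the three directives from the question:
import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Hypothetical servlet contrasting the three directives from the question.
// The directive semantics come from the spec; everything else is made up.
public class CacheDirectiveDemoServlet extends HttpServlet {
  @Override
  protected void doGet(HttpServletRequest req, HttpServletResponse resp)
      throws ServletException, IOException {
    String mode = req.getParameter("mode");
    if ("no-cache".equals(mode)) {
      // May be stored, but must be revalidated with the origin server
      // before every reuse.
      resp.setHeader("Cache-Control", "no-cache");
    } else if ("max-age-0".equals(mode)) {
      // Stored and immediately stale; in practice very close to no-cache,
      // and often paired with must-revalidate to forbid serving it stale.
      resp.setHeader("Cache-Control", "max-age=0, must-revalidate");
    } else {
      // Must not be written to cache storage at all (best effort; not a
      // privacy guarantee, as the spec itself notes).
      resp.setHeader("Cache-Control", "no-store");
    }
    resp.setContentType("text/plain");
    resp.getWriter().println("cache-control demo");
  }
}
The semantics above come from the spec; what can differ between browsers is mainly their behavior when no directive is given at all.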