Browser not sending cookies for cross-origin request using Fetch API (Firefox)

On a given domain (https://example.com) I have set cookies using SameSite=None; Secure; HttpOnly
On another site (https://example.org) I am making a fetch request:
fetch('https://example.com/secure/', {method: 'GET', credentials: 'include'})
From what I can see in Firefox's network tab, there is no pre-flight OPTIONS request being sent. The GET request contains a Host header and an Origin header, but no Cookie header. The response contains access-control-allow-credentials: true, access-control-allow-origin: https://example.org and vary: origin, true
The response from example.com is returning a 401 due to no cookie being sent.
I can't figure out how to make the browser send cookies (I'm also not certain this is isolated to Firefox).
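For reference, a minimal sketch of the combination that generally has to line up for cross-origin cookies; the paths and header values below are assumptions mirroring the setup described above, not a confirmed fix:
// Client on https://example.org calling https://example.com.
// Assumes the cookie was set by example.com with:
//   Set-Cookie: session=...; SameSite=None; Secure; HttpOnly
// and that example.com answers this request with:
//   Access-Control-Allow-Origin: https://example.org   (an explicit origin, not *)
//   Access-Control-Allow-Credentials: true
fetch('https://example.com/secure/', {
  method: 'GET',
  credentials: 'include' // required for the browser to attach cookies cross-origin
})
  .then(res => {
    if (!res.ok) throw new Error(`Request failed: ${res.status}`);
    return res.json();
  })
  .then(data => console.log(data))
  .catch(err => console.error(err));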

Related

Can Access-Control-Request-Headers be disabled in Axios?

I am wondering whether this header is added by Axios or by the browser, so I can tell whether I can disable it in the client or must add support for it on the server.
I added a custom token x-access-token and I'm having CORS issues.
Adding to Jon's comment: The Access-Control-Request-Headers header is added by the browser: When the client application desires to make a request (via fetch or XMLHttpRequest) that includes a "non-standard" header like x-access-token, the browser first makes a preflight request with a header like Access-Control-Request-Headers: x-access-token, and only if the preflight response contains Access-Control-Allow-Headers: x-access-token will it make the desired request.
It is your server's duty to produce the correct preflight response.
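To make the division of labour concrete, a hedged client-side sketch (Axios behaves the same way as fetch here, since the browser itself issues the preflight; the URL and token are made up):
// There is no client-side switch (in Axios or otherwise) to suppress the preflight:
// any cross-origin request that adds a "non-standard" header triggers it.
fetch('https://api.example.com/items', {
  headers: { 'x-access-token': 'abc123' }
})
  .then(res => res.json())
  .then(console.log);
// Before the GET above runs, the browser sends an OPTIONS request with
//   Access-Control-Request-Headers: x-access-token
// and only proceeds if the response includes
//   Access-Control-Allow-Headers: x-access-token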

Should we close the connection of a pre-flight CORS request while sending the response?

As I understand it, if a CORS request comes with some extra headers set, the server first needs to process a preflight request.
With CORS, the server must send the Access-Control-Allow-Headers header to allow uncommon request headers from the client.
Access-Control-Allow-Headers ... - Comma-delimited list of the supported request headers.
e.g. suppose my pre-flight request is
OPTIONS /cors HTTP/1.1
Origin: http://api.bob.com
Access-Control-Request-Method: PUT
Access-Control-Request-Headers: X-Custom-Header
Host: api.alice.com
Accept-Language: en-US
Connection: keep-alive
User-Agent: Mozilla/5.0...
Then from server-side I will send response
Access-Control-Allow-Origin: http://api.bob.com
Access-Control-Allow-Methods: GET, POST, PUT
Access-Control-Allow-Headers: X-Custom-Header
Content-Type: text/html; charset=utf-8
My questions are:
Should I close the connection on the server side when sending the pre-flight response to the client?
Also, how can I cache the pre-flight response so it covers subsequent requests?
Thanks
You can cache the pre-flight result using the Access-Control-Max-Age header: attach it to the headers of the OPTIONS response. An initial OPTIONS request by the user agent (browser) still has to be made; you cannot avoid that. But while the cached result is valid, further pre-flights are answered from the browser's cache and are not issued to the server.
There is no need to close the connection.
Access-Control-Allow-Origin: http://hello-world.example
Access-Control-Max-Age: 3628800
Access-Control-Allow-Methods: PUT
as explained in the linked article; search that page for "could have the following headers specified" to find the relevant section.
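A rough server-side sketch of the same idea (plain Node.js purely for illustration; the origin, header list and 86400-second value are arbitrary choices to adapt):
const http = require('http');

http.createServer((req, res) => {
  if (req.method === 'OPTIONS') {
    res.writeHead(204, {
      'Access-Control-Allow-Origin': 'http://api.bob.com',
      'Access-Control-Allow-Methods': 'GET, POST, PUT',
      'Access-Control-Allow-Headers': 'X-Custom-Header',
      // Lets the browser reuse this pre-flight result (seconds; browsers cap the value),
      // so matching requests within that window skip the OPTIONS round trip.
      'Access-Control-Max-Age': '86400'
    });
    res.end(); // just end the response; the keep-alive connection can stay open
    return;
  }
  res.writeHead(200, { 'Content-Type': 'text/plain' });
  res.end('ok');
}).listen(8080);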

Does If-None-Match need to be set programmatically in an ajax request if the server sends an ETag?

My question is pretty simple, although while searching around I have not found a simple, satisfying answer.
I am using a jQuery ajax request to get data from a server. The server hosts a REST API that sets the ETag and Cache-Control headers on GET responses. The server also sets CORS headers to allow the ETag.
The client of the API is a browser web app. I am using an ajax request to call the API. Here are the response headers from the server after a simple GET request:
Status Code: 200 OK
Access-Control-Allow-Origin: *
Cache-Control: no-transform, max-age=86400
Connection: Keep-Alive
Content-Encoding: gzip
Content-Type: application/json
Date: Sun, 30 Aug 2015 13:23:41 GMT
Etag: "-783704964"
Keep-Alive: timeout=15, max=99
Server: Apache-Coyote/1.1
Transfer-Encoding: chunked
Vary: Accept-Encoding
access-control-allow-headers: X-Requested-With, Content-Type, Etag,Authorization
access-control-allow-methods: GET, POST, DELETE, PUT
All I want to know is:
Do I need to manually collect the ETag from the response headers sent by the server and attach an If-None-Match header to the ajax request? Or does the browser send it by default in a conditional GET request when it has an ETag?
I have done some debugging in the browser's network console, and it seems the browser is doing the conditional GET automatically and setting the If-None-Match header itself.
If that is right: suppose I create a new resource and then make the GET request. It gives me the old cached data the first time, but when I reload the page it gives the updated data. So I am confused: if the dataset on the server side has changed and the server sends a different ETag, why doesn't the browser fetch the updated data from the server unless I reload?
Also, in the case of pagination: suppose I have a URL /users?next=0, where the value of next changes for every new request. Since each response will get its own ETag, will the browser store the ETag per request URL, or does it just store the latest ETag from the previous GET request, irrespective of the URL?
Well, I have somehow figured out the answer myself:
The browser sends the If-None-Match header itself when it sees that a URL returned an ETag on a previous request. The browser saves the ETag per URL, so it does not matter how many requests to different URLs happen.
Also, a trick to force the browser to make a conditional GET to check the ETag:
Set the max-age to a low value (60 s works well for me).
Once the cache expires, the browser sends a conditional GET to check whether the expired cached resource is still valid. If the If-None-Match header matches the ETag, the server responds with 304 Not Modified, meaning the expired cached resource is still valid and can be used.
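If you need to force a revalidation before max-age expires (for example right after creating a new resource), one option is the Fetch API's cache option; a sketch, with a made-up endpoint:
// cache: 'no-cache' tells the browser to revalidate with the server; if it has a
// stored ETag for this URL it attaches If-None-Match itself, and a 304 is served
// transparently from the HTTP cache.
fetch('/users?next=0', { cache: 'no-cache' })
  .then(res => res.json())
  .then(users => console.log(users));

// jQuery (1.4+) has a related option: ifModified: true makes $.ajax send
// If-None-Match / If-Modified-Since based on the previous response for that URL.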

Why is the browser not setting cookies after an AJAX request returns?

I am making an ajax request using $.ajax. The response has the Set-Cookie header set (I've verified this in the Chrome dev tools). However, the browser does not set the cookie after receiving the response! When I navigate to another page within my domain, the cookie is not sent. (Note: I'm not doing any cross-domain ajax requests; the request is in the same domain as the document.)
What am I missing?
EDIT: Here is the code for my ajax request:
$.post('/user/login', JSON.stringify(data));
Here is the request, as shown by the Chrome dev tools:
Request URL:https://192.168.1.154:3000/user/login
Request Method:POST
Status Code:200 OK
Request Headers:
Accept:*/*
Accept-Encoding:gzip,deflate,sdch
Accept-Language:en-US,en;q=0.8
Connection:keep-alive
Content-Length:35
Content-Type:application/x-www-form-urlencoded; charset=UTF-8
DNT:1
Host:192.168.1.154:3000
Origin:https://192.168.1.154:3000
Referer:https://192.168.1.154:3000/
User-Agent:Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/33.0.1750.154 Safari/537.36
X-Requested-With:XMLHttpRequest
Form Data:
{"UserId":"blah","Password":"blah"}:
Response:
Response Headers:
Content-Length:15
Content-Type:application/json; charset=UTF-8
Date:Sun, 16 Mar 2014 03:25:24 GMT
Set-Cookie:SessionId=MTM5NDk0MDMyNHxEdi1CQkFFQ180SUFBUkFCRUFBQVRfLUNBQUVHYzNSeWFXNW5EQXNBQ1ZObGMzTnBiMjVKWkFaemRISnBibWNNTGdBc1ZFcDNlU3RKVFdKSGIzQlNXRkkwVjJGNFJ6TlRVSHA0U0ZJd01XRktjMDF1Y1c1b2FGWXJORzV4V1QwPXwWf1tz-2Fy_Y4I6fypCzkMJyYxhgM3LjVHGAlKyrilRg==; HttpOnly
OK, so I finally figured out the problem. It turns out that setting the Path option is important when sending cookies in an AJAX request. If you set Path=/, e.g.:
Set-Cookie:SessionId=foo; Path=/; HttpOnly
...then the browser will set the cookie when you navigate to a different page. Without setting Path, the browser uses the "default" path. Apparently, the default path for a cookie set by an AJAX request is different from the default path used when you navigate to a page directly. I'm using Go/Martini, so on the server-side I do this:
session.Options(session.Options{HttpOnly: true, Path:"/"})
I'd guess that Python/Ruby/etc. have a similar mechanism for setting Path.
See also: cookies problem in PHP and AJAX
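For anyone not on Go/Martini: a bare Node.js sketch of the same Path=/ idea (only an illustration; framework session helpers usually expose a path option of their own):
const http = require('http');

http.createServer((req, res) => {
  if (req.method === 'POST' && req.url === '/user/login') {
    // Explicit Path=/ so the cookie is valid site-wide, not just under the AJAX endpoint's path.
    res.setHeader('Set-Cookie', 'SessionId=placeholder-value; Path=/; HttpOnly');
    res.writeHead(200, { 'Content-Type': 'application/json' });
    res.end('{"ok": true}');
    return;
  }
  res.writeHead(404);
  res.end();
}).listen(3000);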
#atomkirk's answer didn't apply to me because
I don't use the fetch API
I was making cross-site requests (i.e. CORS)
NOTE: If your server is using Access-Control-Allow-Origin: * (aka "all origins"/"wildcard origins"), you may not be able to send credentials (see below).
As for the fetch API: CORS requests need {credentials: 'include'} for both sending and receiving cookies.
For CORS requests, use the "include" value to allow sending
credentials to other domains:
fetch('https://example.com:1234/users', {
  credentials: 'include'
})
... To opt into accepting cookies from the server, you must use the credentials option.
{credentials:'include'} just sets xhr.withCredentials=true
Check fetch code
if (request.credentials === 'include') {
  xhr.withCredentials = true
}
So plain Javascript/XHR.withCredentials is the important part.
If you're using jQuery, you can set withCredentials (remember to use crossDomain: true) using $.ajaxSetup(...)
$.ajaxSetup({
  crossDomain: true,
  xhrFields: {
    withCredentials: true
  }
});
If you're using AngularJS, the $http service config arg accepts a withCredentials property:
$http({
  withCredentials: true
});
If you're using Angular (Angular IO), the common.http.HttpRequest service options arg accepts a withCredentials property:
this.http.post<Hero>(this.heroesUrl, hero, {
  withCredentials: true
});
As for the request: when xhr.withCredentials = true, the Cookie header is sent.
Before I changed xhr.withCredentials to true:
I could see the Set-Cookie name & value in the response, but Chrome's "Application" tab in the Developer Tools showed the name with an empty value.
Subsequent requests did not send a Cookie request header.
After the change to xhr.withCredentials = true:
I could see the cookie's name and value in Chrome's "Application" tab (a value consistent with the Set-Cookie header).
Subsequent requests did send a Cookie request header with the same value, so my server treated me as "authenticated".
As for the response: the server may need certain Access-Control... headers
For example, I configured my server to return these headers:
Access-Control-Allow-Credentials:true
Access-Control-Allow-Origin:https://{your-origin}:{your-port}
EDIT: this approach won't work if you allow all origins/wildcard origins, as described here (thanks to #ChandanBhattad):
The CORS request was attempted with the credentials flag set, but the server is configured using the wildcard ("*") as the value of Access-Control-Allow-Origin, which doesn't allow the use of credentials.
Until I made this server-side change to the response headers, Chrome logged errors in the console like
Failed to load https://{saml-domain}/saml-authn: Redirect from https://{saml-domain}/saml-redirect has been blocked by CORS policy:
The value of the 'Access-Control-Allow-Credentials' header in the response is '' which must be 'true' when the request's credentials mode is 'include'. Origin https://{your-domain} is therefore not allowed access.
The credentials mode of requests initiated by the XMLHttpRequest is controlled by the withCredentials attribute.
After making this Access-* header change, Chrome did not log errors; the browser let me check the authenticated responses for all subsequent requests.
If you're using the new fetch API, you can try including credentials:
fetch('/users', {
  credentials: 'same-origin'
})
That's what fixed it for me.
In particular, using the polyfill: https://github.com/github/fetch#sending-cookies
This may help somebody randomly falling across this question.
I found that forcing the URL to https:// rather than http://, even though the server doesn't have a certificate and Chrome complains, will fix this issue.
In my case, the cookie size exceeded 4096 bytes (Google Chrome). I had a dynamic cookie payload that would increase in size.
Browsers will ignore the Set-Cookie response header if the cookie exceeds the browser's limit, and the cookie will not be set.
See here for cookie size limits per browser.
I know this isn't the solution, but this was my issue, and I hope it helps someone :)
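If you suspect the same issue, a rough client-side sanity check (the 4096-byte figure is the commonly cited per-cookie limit, not an exact number for every browser):
// Approximate a cookie's on-the-wire size before relying on the server setting it.
function cookieTooLarge(name, value) {
  const serialized = `${name}=${value}`;
  return new Blob([serialized]).size > 4096; // byte length, not character count
}

console.log(cookieTooLarge('payload', 'x'.repeat(5000))); // true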

Can Firefox be made to accept third-party cookies from an AJAX response header?

I'm writing some code that makes an AJAX request to our web server. Our server runs some logic and then responds with some JSON. It may also respond with a set-cookie header:
Set-Cookie: our_organisation=[uuid]; domain=.our_organisation.com; path=/; expires=[soon]
It works in Chrome and Safari as far as I can tell, but not in Firefox. Firefox will accept the cookie if it's an image request instead. Am I doing something wrong here?
I already had a problem where I couldn't read the AJAX response on the client side in Firefox; this was fixed by setting Access-Control-Allow-Origin: * in the response header.
This is a cross-site XMLHttpRequest?
If so, per http://dev.w3.org/2006/webapi/XMLHttpRequest-2/, withCredentials defaults to false, so the "credentials flag" used for CORS is set to false. Then per http://dvcs.w3.org/hg/cors/raw-file/tip/Overview.html the "block cookies" flag is set during the HTTP GET, and per http://www.whatwg.org/specs/web-apps/current-work/multipage/fetching-resources.html#fetch that means Set-Cookie headers are ignored. It sounds like Chrome and Safari are simply not following the specs here.
You can set withCredentials = true on the XHR object to send cookies. But note that if you do that you have to list an actual origin in Access-Control-Allow-Origin; you can't just use *.
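A minimal sketch of that, with placeholder URLs and the required server headers shown as comments:
const xhr = new XMLHttpRequest();
xhr.open('GET', 'https://api.our_organisation.com/whoami');
xhr.withCredentials = true; // send cookies and honour Set-Cookie on cross-site responses
xhr.onload = () => console.log(xhr.status, xhr.responseText);
xhr.send();

// The response must then include an explicit origin, e.g.:
//   Access-Control-Allow-Origin: https://www.our_organisation.com
//   Access-Control-Allow-Credentials: true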
