CakePHP: Problems with session, autoRegenerate, requestCountdown, AJAX

What I researched elsewhere
An answer to this question explains how to use autoRegenerate and requestCountdown to prolong the session for as long as the user is active.
This question has an answer explaining what happens with ajax calls:
If you stay on the same page, JavaScript makes a request, which generates a new session_id, but the new session_id is not recorded.
All subsequent ajax requests use the old session_id, which is declared invalid and returns an empty session.
Somewhere else it was said that some browsers send a different userAgent with ajax requests, and that Session.checkAgent has to be set to false to guarantee that ajax calls work. But since these ajax calls only fail sometimes, I don't think that is the reason for the problem.
My problem is
I had set requestCountdown to 1, but then I received errors on pages that automatically perform ajax requests when the page is loaded. I increased requestCountdown to 4, which should be enough most of the time. But some users with some browsers receive error messages because one or more of the ajax calls gets a "403 Forbidden" response. For the same page, sometimes the error appears and sometimes it doesn't.
What I want: if the session length is e.g. 30 minutes and the user opens a page (or triggers an event that causes an ajax call) at, say, minute 29, the session should be prolonged for another 30 minutes.
But I seem to be stuck between two problems:
If the countdown is set to a value greater than 1 and the user happens to visit a page that doesn't contain any ajax requests, the countdown value is decreased by only 1; it doesn't reach 0, and the session is not regenerated. E.g. if the countdown is set to 10, the user will have to click 10 times in order to regenerate the session.
If the countdown is set to 1, the session will be regenerated with every request, but on some browsers some ajax calls will occasionally fail.
My questions
To make sure I am understanding it correctly: a session cannot simply be prolonged; it has to be "regenerated", which implies that the session id is changed?
Maybe this is all conceptually correct, but I wonder if I am just missing an additional setting or something else to get it to work?
Example request and response headers (from my test machine)
Request
-------
POST /proxies/refreshProxiesList/0 HTTP/1.1
Host: localhost:84
User-Agent: Mozilla/5.0 (Windows NT 6.3; WOW64; rv:27.0) Gecko/20100101 Firefox/27.0
Accept: */*
Accept-Language: de-de,de;q=0.8,en-us;q=0.5,en;q=0.3
Accept-Encoding: gzip, deflate
X-Requested-With: XMLHttpRequest
Referer: http://localhost:84/users/home
Cookie: CakeCookie[lang]=de; CAKEPHP=b4o4ik71rven5478te1e0asjc6
Connection: keep-alive
Pragma: no-cache
Cache-Control: no-cache
Content-Length: 0
Response
--------
HTTP/1.1 403 Forbidden
Date: Tue, 18 Feb 2014 10:24:52 GMT
Server: Apache/2.4.4 (Win32) OpenSSL/1.0.1e PHP/5.5.3
X-Powered-By: PHP/5.5.3
Content-Length: 0
Keep-Alive: timeout=5, max=100
Connection: Keep-Alive
Content-Type: text/html; charset=UTF-8

CakePHP uses sessions with cookies. It sounds to me like the problem is that while the session itself can be prolonged through the timeout option, the session cookie cannot easily be prolonged, so you end up losing your session anyway. The people in that thread suggest refreshing the session so that a new cookie is created.
You could, as one person suggested, extend the life of the session cookie to be much longer, though the problem will still be there; it will just be less obvious. Maybe you could write something yourself to re-save the session cookie with a new expiration time? Though I haven't found mention of people doing this, so maybe not.
Googling for information about CakePHP and session cookie expiration, it seems that this is a known problem (CakePHP Session updates but cookie expiry doesn't) that people have made workarounds for.
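For the countdown dilemma described in the question, one common workaround is to keep the session alive from the client on a timer rather than relying on whichever ajax calls a page happens to fire. A minimal sketch, assuming you add a lightweight keep-alive endpoint; the /users/keepalive URL and the 90% safety margin are my own choices, not CakePHP API:

```javascript
// Ping a (hypothetical) keep-alive endpoint shortly before the session
// would expire, so the session is touched by a predictable request
// instead of depending on which pages happen to fire ajax calls.

// Ping at 90% of the session lifetime (arbitrary safety margin).
function keepAliveIntervalMs(sessionTimeoutMinutes) {
  return Math.floor(sessionTimeoutMinutes * 60 * 1000 * 0.9);
}

function startKeepAlive(sessionTimeoutMinutes, ping) {
  return setInterval(ping, keepAliveIntervalMs(sessionTimeoutMinutes));
}

// Browser usage: startKeepAlive(30, () => fetch('/users/keepalive'));
```

With this in place the countdown value is consumed at a predictable rate regardless of how many ajax calls an individual page makes.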

Related

HTTP Cache-Control header works only on localhost

I'm trying to configure caching with Cache-Control for a REST endpoint in my webapp, it works locally, but when I deploy on our production server the browser just won't cache the responses.
The endpoint is queried via a parametrized ajax request (as shown below).
Some relevant notes :
I use a cache buster parameter (_) that is a unix timestamp generated at page load. It doesn't change across ajax requests.
localhost is on HTTP whereas production is on HTTPS. The certificate is valid and there are no related errors.
Both Firefox 59.0.2 and Chrome 66.0.3359.139 exhibit this behavior, so I assume this is something in the configuration.
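For reference, the ajax call looks roughly like this (a sketch; the endpoint matches the traces below, the helper name is mine):

```javascript
// Sketch of the parametrized ajax request. The `_` cache buster is
// generated once at page load and reused, so repeated calls with the
// same start/end produce the same URL and can hit the browser cache.
const cacheBuster = Date.now(); // fixed for the lifetime of the page

function eventsUrl(start, end) {
  const params = new URLSearchParams({ _: cacheBuster, start, end });
  return `/webapp/rest/events?${params}`;
}

// Browser usage:
// fetch(eventsUrl('2018-04-29', '2018-06-10')).then(r => r.json());
```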
Localhost
Request URL: http://localhost:8080/webapp/rest/events?_=1525720266960&start=2018-04-29&end=2018-06-10
Request Method: GET
Status Code: 200 OK
Referrer Policy: no-referrer-when-downgrade
=== Request ===
Accept: application/json, text/javascript, */*; q=0.01
Accept-Encoding: gzip, deflate, br
Connection: keep-alive
Cookie: JSESSIONID=<token>
Host: localhost:8080
Referer: http://localhost:8080/webapp/
X-Requested-With: XMLHttpRequest
=== Response ===
Cache-Control: no-transform, max-age=300, private
Connection: keep-alive
Content-Length: 5935
Content-Type: application/json
Following requests (for the same parameters) are effectively loaded from cache, the only difference being: Status Code: 200 OK (from disk cache)
This seems fine, since I don't want to revalidate: the resource should only be fetched again, without validation, once it has gone stale after the max-age duration specified by Cache-Control.
Production
Request URL: https://www.example.org/webapp/rest/events?_=1525720216575&start=2018-04-29&end=2018-06-10
Request Method: GET
Status Code: 200 OK
Referrer Policy: no-referrer-when-downgrade
=== Request ===
Accept: application/json, text/javascript, */*; q=0.01
Accept-Encoding: gzip, deflate, br
Connection: keep-alive
Cookie: JSESSIONID=<token>
Host: www.example.org
Referer: https://www.example.org/webapp/
X-Requested-With: XMLHttpRequest
=== Response ===
Cache-Control: no-transform, max-age=300, private
Connection: close
Content-Length: 5935
Content-Type: application/json
In this case, the response is never loaded from cache afterwards.
I've stripped some headers that I thought superfluous (Server, X-Powered-By, User-Agent, Date).
Question
What prevents the responses from being cached by the browser when talking to the production server?
2 days later I tried again and caching works properly. (I swear I'm not insane.)
Same request, same headers, same response.
I suspect that it falls into some kind of heuristic that overrides the response's Cache-Control.
It probably has to do with the fact that this endpoint did not specify Cache-Control before, so the browser overlooked the header: its heuristic favored refetching over caching, since being more cautious can't go wrong.
RFC2616
13.2.2 Heuristic Expiration
Since origin servers do not always provide explicit expiration times,
HTTP caches typically assign heuristic expiration times, employing
algorithms that use other header values (such as the Last-Modified
time) to estimate a plausible expiration time. The HTTP/1.1
specification does not provide specific algorithms, but does impose
worst-case constraints on their results. Since heuristic expiration
times might compromise semantic transparency, they ought to be used
cautiously, and we encourage origin servers to provide explicit
expiration times as much as possible.
All in all, this is the best explanation I have.

Browser serving an obsolete Authorization header from cache

I'm experiencing my client getting logged out after an innocent request to my server. I control both ends and after a lot of debugging, I've found out that the following happens:
The client sends the request with a correct Authorization header.
The server responds with 304 Not Modified without any Authorization header.
The browser serves the full response including an obsolete Authorization header as found in its cache.
From now on, the client uses the obsolete Authorization and gets kicked out.
From what I know, the browser must not cache any request containing Authorization. Nonetheless,
chrome://view-http-cache/http://localhost:10080/api/SearchHost
shows
HTTP/1.1 200 OK
Date: Thu, 23 Nov 2017 23:50:16 GMT
Vary: origin, accept-encoding, authorization, x-role
Cache-Control: must-revalidate
Server: 171123_073418-d8d7cb0 =
x-delay-seconds: 3
Authorization: Wl6pPirDLQqWqYv
Expires: Thu, 01 Jan 1970 00:00:00 GMT
ETag: "zUxy1pv3CQ3IYTFlBg3Z3vYovg3zSw2L"
Content-Encoding: gzip
Content-Type: application/json;charset=utf-8
Content-Length: 255
The funny Server header replaces the Jetty server header (which shouldn't be served, for security reasons) with some internal information - ignore that. This is what curl says:
< HTTP/1.1 304 Not Modified
< Date: Thu, 23 Nov 2017 23:58:18 GMT
< Vary: origin, accept-encoding, authorization, x-role
< Cache-Control: must-revalidate
< Server: 171123_073418-d8d7cb0 =
< ETag: "zUxy1pv3CQ3IYTFlBg3Z3vYovg3zSw2L"
< x-delay-seconds: 3
< Content-Encoding: gzip
This happens in Firefox, too, although I can't reproduce it at the moment.
The RFC continues, and it looks like the answer linked above is not exact:
unless a cache directive that allows such responses to be stored is present in the response
It looks like the response is cacheable. That's fine, I do want the content to be cached, but I don't want the Authorization header to be served from cache. Is this possible?
Explanation of my problem
My server used to send the Authorization header only when responding to a login request. That used to work fine; the problems came with new requirements.
Our site allows users to stay logged in arbitrarily long (we do no sensitive business). We're changing the format of the authorization token, and we don't want to force all users to log in again because of it. Therefore, I made the server send the updated authorization token whenever it sees an obsolete but valid one. So now any response may contain an authorization token, but most do not.
The browser cache, combining the still-valid response with an obsolete authorization token, gets in the way.
As a workaround, I made the server send no ETag when an authorization token is present. It works, but I'd prefer a cleaner solution.
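That workaround can be sketched as a small response-building rule (hypothetical helper; the names are mine, not the server's actual API):

```javascript
// Sketch of the workaround: never send an ETag together with a refreshed
// Authorization token, so a later 304 can never pair a cached body with
// an obsolete token.
function responseHeaders(etag, refreshedToken) {
  const headers = {};
  if (refreshedToken) {
    headers['Authorization'] = refreshedToken; // fresh token, no ETag
  } else if (etag) {
    headers['ETag'] = etag; // cacheable as usual
  }
  return headers;
}
```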
The quote in the linked answer is misleading because it omitted an important part: "if the cache is shared".
Here's the correct quote (RFC7234 Section 3):
A cache MUST NOT store a response to any request, unless: ... the Authorization header field (see Section 4.2 of [RFC7235]) does not appear in the request, if the cache is shared,
That part of the RFC is basically a summary.
This is the complete rule (RFC7234 Section 3.2) that says essentially the same thing:
A shared cache MUST NOT use a cached response to a request with an Authorization header field (Section 4.2 of [RFC7235]) to satisfy any subsequent request unless a cache directive that allows such responses to be stored is present in the response.
Is a browser cache a shared cache?
This is explained in Introduction section of the RFC:
A private cache, in contrast, is dedicated to a single user; often, they are deployed as a component of a user agent.
That means a browser cache is a private cache.
It is not a shared cache, so the above rule does not apply, which means both Chrome and Firefox are doing their jobs correctly.
Now the solution.
The specification suggests the possibility of a cached response containing Authorization being reused without the Authorization header.
Unfortunately, it also says that the feature is not widely implemented.
So the easiest and also the most future-proof solution I can think of is to make sure that any response containing an Authorization token isn't cached.
For instance, whenever the server sees an obsolete but valid Authorization token, send a new valid one along with Cache-Control: no-store to disallow caching.
Also, you must never send Cache-Control: must-revalidate with an Authorization header, because the must-revalidate directive actually allows the response to be cached, including by shared caches, which can cause even more problems in the future.
... unless a cache directive that allows such responses to be stored is present in the response.
In this specification, the following Cache-Control response directives (Section 5.2.2) have such an effect: must-revalidate, public, and s-maxage.
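As a quick check, that exception list can be expressed as a predicate (a sketch of the RFC 7234 §3.2 rule quoted above; the function name is mine):

```javascript
// Does this Cache-Control value allow a shared cache to store/reuse a
// response to a request that carried an Authorization header?
// Per the quote above: must-revalidate, public, and s-maxage do.
function allowsSharedCachingWithAuth(cacheControl) {
  const directives = cacheControl
    .toLowerCase()
    .split(',')
    .map(d => d.trim().split('=')[0]);
  return ['must-revalidate', 'public', 's-maxage']
    .some(d => directives.includes(d));
}
```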
My current solution is to send an authorization header in every response; using a placeholder value of - when no authorization is wanted.
The placeholder value is obviously meaningless and the client knows it and happily ignores it.
This solution is ugly as it adds maybe 20 bytes to every response, but that's still better than occasionally having to resend a whole response content as with the approach mentioned in my question. Moreover, with HTTP/2 it'll be free.
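On the client, the placeholder scheme only needs a guard when storing the token (a sketch; the `-` placeholder is the one described above, the helper name is mine):

```javascript
// Store a replacement Authorization token only when the server sent a
// real one; the `-` placeholder (and a missing header) are ignored.
function updateStoredToken(store, responseAuthHeader) {
  if (responseAuthHeader && responseAuthHeader !== '-') {
    store.token = responseAuthHeader;
  }
  return store;
}
```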

Redirect from a Servlet filter, when an AJAX request is made [duplicate]

While trying to redirect the user to a URL, it works with GET requests but not with postback requests.
Through Firebug's Network window, I can see the redirect response received by the browser after the postback request (the one that should cause the redirect) completes. The browser seemingly initiates a GET request for the redirect URL but doesn't actually redirect; it remains on the same page.
I use JSF on the server side. The initiated GET request is not received by the server at all, even though the browser initiates it at the server's demand. I guess the problem is somewhere on the client side.
Can anyone please explain how to make the redirect work successfully? Let me know in case I should provide any more information.
Edit:
Request header for redirect:
GET /Px10Application/welcome.xhtml HTTP/1.1
Host: localhost:8080
User-Agent: Mozilla/5.0 (Windows NT 6.2; rv:20.0) Gecko/20100101 Firefox/20.0
Accept: application/xml, text/xml, */*; q=0.01
Accept-Language: en-US,en;q=0.5
Accept-Encoding: gzip, deflate
DNT: 1
Referer: http://localhost:8080/Px10Application/channelPages.xhtml?channelId=-3412&type=Group
X-Requested-With: XMLHttpRequest
Faces-Request: partial/ajax
Content-Type: application/x-www-form-urlencoded; charset=UTF-8
Cookie: hb8=wq::db6a8873-f1dc-4dcc-a784-4514ee9ef83b; JSESSIONID=d40337b14ad665f4ec02f102bb41; oam.Flash.RENDERMAP.TOKEN=-1258fu7hp9
Connection: keep-alive
Pragma: no-cache
Cache-Control: no-cache
Response header for redirect:
HTTP/1.1 200 OK
X-Powered-By: Servlet/3.0 JSP/2.2 (GlassFish Server Open Source Edition 3.1 Java/Sun Microsystems Inc./1.6)
Server: GlassFish Server Open Source Edition 3.1
Set-Cookie: oam.Flash.RENDERMAP.TOKEN=-1258fu7hp8; Path=/Px10Application
Pragma: no-cache
Cache-Control: no-cache
Expires: -1
Content-Type: text/xml;charset=UTF-8
Content-Length: 262
Date: Wed, 22 May 2013 17:18:56 GMT
X-Requested-With: XMLHttpRequest
Faces-Request: partial/ajax
You're thus attempting to send a redirect on a JSF ajax request using the "plain vanilla" Servlet API's HttpServletResponse#sendRedirect(). This is not right. The XMLHttpRequest does not treat a 302 response as a new window.location, but just as a new ajax request. However, as you're returning a complete plain-vanilla HTML page as the ajax response instead of a predefined XML document with instructions on which HTML parts to update, the JSF ajax engine has no clue what to do with the response of the redirected ajax request. You end up with a JS error (didn't you see it in the JS console?) and no visual feedback if you don't have a jsf.ajax.onError() handler configured.
In order to instruct the JSF ajax engine to change the window.location, you need to return a special XML response. If you have used ExternalContext#redirect() instead, then it would have taken place fully transparently.
externalContext.redirect(redirectURL);
However, if you're not inside JSF context, e.g. in a servlet filter or so, and thus don't have the FacesContext at hands, then you should be manually creating and returning the special XML response.
if ("partial/ajax".equals(request.getHeader("Faces-Request"))) {
    response.setContentType("text/xml");
    response.getWriter()
        .append("<?xml version=\"1.0\" encoding=\"UTF-8\"?>")
        .printf("<partial-response><redirect url=\"%s\"></redirect></partial-response>", redirectURL);
} else {
    response.sendRedirect(redirectURL);
}
If you happen to use JSF utility library OmniFaces, then you can also use Servlets#facesRedirect() for the job:
Servlets.facesRedirect(request, response, redirectURL);
See also:
Authorization redirect on session expiration does not work on submitting a JSF form, page stays the same
JSF Filter not redirecting After Initial Redirect

Does if-no-match need to be set programmatically in ajax request, if server sends Etag

My question is pretty simple, although while searching around I have not found a simple, satisfying answer.
I am using a jQuery ajax request to get data from a server. The server hosts a REST API that sets the ETag and Cache-Control headers on GET requests. The server also sets CORS headers to allow the ETag.
The client of the API is a browser web app. I am using an ajax request to call the API. Here are the response headers from the server after a simple GET request:
Status Code: 200 OK
Access-Control-Allow-Origin: *
Cache-Control: no-transform, max-age=86400
Connection: Keep-Alive
Content-Encoding: gzip
Content-Type: application/json
Date: Sun, 30 Aug 2015 13:23:41 GMT
Etag: "-783704964"
Keep-Alive: timeout=15, max=99
Server: Apache-Coyote/1.1
Transfer-Encoding: chunked
Vary: Accept-Encoding
access-control-allow-headers: X-Requested-With, Content-Type, Etag,Authorization
access-control-allow-methods: GET, POST, DELETE, PUT
All I want to know is:
Do I need to manually collect the ETag from the response headers sent by the server and attach an If-None-Match header to the ajax request? Or does the browser send it by default in a conditional GET request when it has an ETag?
I have done some debugging in the browser's network console, and it seems the browser is doing the conditional GET automatically and sets the If-None-Match header.
If that is right: suppose I created a new resource and then called the GET request. It gives me the old cached data the first time, but when I reload the page, it gives the updated one. So I am confused: if the dataset on the server side has changed and the server sends a different ETag, why doesn't the browser get the updated data set unless I reload?
Also, in the case of pagination: suppose I have a URL /users?next=0, where next is a query param whose value changes for every new request. Since each response will get its own ETag, will the browser store the ETag per request URL, or does it just store the latest ETag of the previous GET request, irrespective of the URL?
Well, I have somehow figured out the solution myself:
The browser sends the If-None-Match header itself when it sees that the URL had an ETag on a previous response. The browser saves the ETag per URL, so it does not matter how many requests to different URLs happen.
Also, a trick to force the browser to perform a conditional GET to check the ETag:
Set the max-age to a low value (60s works well for me).
Once the cache expires, the browser will send a conditional GET to check whether the expired cached resource is still valid. If the If-None-Match header matches the ETag, the server sends back a 304 Not Modified response. This means the expired cached resource is valid and can be reused.
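Manual If-None-Match handling is therefore unnecessary for plain ajax calls; it is only needed if you maintain your own cache. A sketch of that per-URL bookkeeping, mirroring what the browser does internally (the names are mine):

```javascript
// Per-URL ETag store: remember the ETag from each response and attach
// If-None-Match on the next request to the same URL.
const etags = new Map();

function rememberEtag(url, etag) {
  if (etag) etags.set(url, etag);
}

function conditionalHeaders(url) {
  const etag = etags.get(url);
  return etag ? { 'If-None-Match': etag } : {};
}
```

Because the store is keyed by the full URL, paginated requests like /users?next=0 and /users?next=1 each keep their own ETag, as described above.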

IE caching issue, EVEN if correctly configured

I make a GET request to www.mysite.com and in the header I receive:
Cache-Control no-cache, no-store, must-revalidate
Expires Thu, 01 Jan 1970 00:00:00 GMT
Pragma no-cache
Nevertheless, I receive notifications that some visitors do not see my site but instead an error that occurred last week and has since been fixed.
It seems the error has been cached in their browsers and a new request is not being made.
Can this be possible?
I believe browsers (including IE) won't cache whole pages; the DOM is rebuilt after every request, since that data is contained in the response after all.
Therefore, your visitors may still be experiencing errors. Does your fix depend on resources like JS or CSS? In that case you may have caching issues.
