How to set nginx cache headers to never expire?

Right now I'm using this:
location ~* \.(js|css)$ { # |png|jpg|jpeg|gif|ico
    expires max;
    #log_not_found off; # what's this for?
}
And this is what I see in firebug:
Did it work? If I understand correctly, my browser is asking for the file again and nginx answers 'not modified' (304), so my browser uses the cache. But I thought the browser shouldn't even ask for the file, since it already knows it will never expire.
Any thoughts?

Do not use F5 to reload the page. Instead, click in the URL bar and press Enter, or click a link. That's how I got only one request.

Clearly, your file is not stale: its max-age and expiry date are still valid, so the browser will not contact the server. The browser doesn't ask for the file unless it is stale, i.e. its Cache-Control max-age has elapsed or its Expires date has passed. Only then will it ask the server whether the copy it has is still valid; if yes, the server answers 'not modified' and the same copy is served, otherwise the browser fetches a new one.
Update:
Here is the thing: F5/refresh always makes the browser ask the server whether anything has been modified, sending an If-Modified-Since request header. That is different from normal navigation, coming back to a page, or following links, where the browser does not ask the server and loads from cache silently (no server call). Also, if you are testing with Live HTTP Headers on Firefox, it will show you exactly what is requested, while Firebug will always show you the If-Modified-Since request. Safari's developer menu should show a load time of 0. Hope it helps.
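If you want to verify this from the page itself rather than through Firebug, the Resource Timing API reports whether an asset came out of the browser cache. A minimal sketch, assuming a same-origin asset at /js/app.js (the path is made up):
// transferSize is 0 when the resource was served from the browser cache
// (it is also 0 for cross-origin resources without Timing-Allow-Origin).
performance.getEntriesByType('resource').forEach(function (entry) {
    if (entry.name.indexOf('/js/app.js') !== -1) {
        console.log(entry.name, entry.transferSize === 0 ? 'from cache' : 'from network');
    }
});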

Is it possible to do cache busting with HTTP/2?

Has anybody tried?
Here is the use case. In a first request-response cycle, this would happen:
Request 1:
GET / HTTP/1.1
...
Response 1
HTTP/1.1 200 OK
Etag: version1
Cache-control: max-age=1
... angly html here
....<link href="mycss.css" >
...
Request 2:
GET /mycss.css HTTP/1.1
...
Response 2 (probably pushed):
Etag: version1
Cache-control: max-age=<duration-of-the-universe>
...
... brackety css ...
...
and then, when the browser goes to the same page a second time, it will of course fetch the "/" resource again because of the very short max-age:
GET / HTTP/1.1
...
If-None-Match: version1
But it won't fetch mycss.css if it has it in cache. However, the server can use the validator in the If-None-Match header of the request for "/" to get an idea of the age of the client's cache, and may conclude that the browser's version of mycss.css is too old. In that case, before even answering the previous request, the server can "promise" (push) a new version of mycss.css.
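For concreteness, here is a rough server-side sketch of that scenario using Node's built-in http2 module (the key/cert paths and the pushed CSS are placeholders; this only illustrates the push mechanics, not the cache-age heuristic):
const http2 = require('http2');
const fs = require('fs');

const server = http2.createSecureServer({
    key: fs.readFileSync('server-key.pem'),   // placeholder paths
    cert: fs.readFileSync('server-cert.pem')
});

server.on('stream', function (stream, headers) {
    if (headers[':path'] === '/') {
        // Push mycss.css alongside the HTML response
        stream.pushStream({ ':path': '/mycss.css' }, function (err, pushStream) {
            if (err) return;
            pushStream.respond({ ':status': 200, 'content-type': 'text/css',
                                 'cache-control': 'max-age=31536000', 'etag': 'version2' });
            pushStream.end('/* brackety css */');
        });
        stream.respond({ ':status': 200, 'content-type': 'text/html',
                         'cache-control': 'max-age=1', 'etag': 'version1' });
        stream.end('<link rel="stylesheet" href="mycss.css">');
    }
});

server.listen(8443);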
By the specs, should the browser accept and use it?
Overview:
I still don't know what the answer to my question is from a purely theoretical side, but at least today it doesn't seem possible in practice to do cache-busting this way :-( with either Google Chrome or Firefox. Both reject or ignore the pushed stream if they believe that the resource they have in cache is fresh.
I also got this from somebody who prefers to remain anonymous:
Browsers will typically put resources received through push in a
"demilitarized zone" and only once the client asks for that resource
it will be moved into the actual cache. So just pushing random
things will not make them end up in the browser cache even if the
browser accepts them at the push moment.
Update
As of early 2016 it is still not possible, mainly due to a lack of consensus on how this should be handled, or whether it should be allowed at all.
As this page shows, even with HTTP/2, the way to solve the stale assets issue is to create a unique URL for each asset version, and then ensure that the user receives that new URL when they re-visit the page.
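A minimal sketch of that unique-URL approach in Node.js (the file names are made up; a build tool normally does this for you):
const crypto = require('crypto');
const fs = require('fs');

// Derive a short fingerprint from the file contents
const css = fs.readFileSync('mycss.css');
const hash = crypto.createHash('md5').update(css).digest('hex').slice(0, 8);

// Publish the asset under a fingerprinted name and reference that name in the HTML.
// The asset can then be served with a far-future max-age, and any content change
// produces a new URL, so stale copies are never reused.
fs.copyFileSync('mycss.css', 'mycss.' + hash + '.css');
console.log('<link rel="stylesheet" href="/mycss.' + hash + '.css">');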

Force chrome to cache static content like images

I want to improve my browsing experience by caching static content like images (jpg, png, gif) and fonts. It always happens that when I'm viewing a webpage with a lot of images and then refresh with F5, the same content is downloaded again.
I know that it's because the response headers may contain no-cache or max-age=0, and sometimes it even happens when there is no Cache-Control or max-age in the response at all.
But for images or fonts that never change, max-age=0 is useless. So I wanted to know if there is a way to override the response headers and set them to a max-age of one year. Maybe with a Chrome extension?
Yes, you can do this with a Chrome extension. See the Change HTTP Headers extension, which already does it.
For your specific case, you just need to do this:
Add an event listener which should be called whenever headers are received.
Read the details of the headers
Check if response content type is image
Add/Update the desired header to the headers
To accomplish this you can use the webRequest onHeadersReceived event.
Documentation of onHeadersReceived
onHeadersReceived (optionally synchronous): Fires each time that an HTTP(S) response header is received. Due to redirects and authentication requests this can happen multiple times per request. This event is intended to allow extensions to add, modify, and delete response headers, such as incoming Set-Cookie headers.
Your code will look something like this
chrome.webRequest.onHeadersReceived.addListener(function (details) {
    var headers = details.responseHeaders;
    // The 'types' filter below already restricts this listener to image responses
    for (var i = 0; i < headers.length; i++) {
        if (headers[i].name.toLowerCase() === 'cache-control') {
            headers[i].value = 'max-age=31536000'; // cache for about one year
            return {responseHeaders: headers};
        }
    }
    headers.push({name: 'Cache-Control', value: 'max-age=31536000'});
    return {responseHeaders: headers};
},
{urls: ['https://*/*'], types: ['image']},
['blocking', 'responseHeaders']);
PS: I have not run and tested the code so please excuse the typos.
EDIT (after @RobW's comment)
No, this is not possible as of now (22 March 2014). Adding Cache-Control this way has no influence on the caching behavior. Check out this answer for more details.
Although this is an old question, I stumbled upon it recently. Later I found the Chrome extension "Speed-Up Browsing", which seems to do exactly what the OP asked for.
For me it worked.
https://chrome.google.com/webstore/detail/speed-up-browsing/hkhnldpdljhiooeallkmlajnogjghdfb?hl=en
You could use Fiddler https://www.telerik.com/fiddler (free) as a proxy, adding caching for selected URLs or patterns. I did that.

Disable cache in ExtLib REST control (which uses dojox.data.JsonRestStore)

In my XPage I have an xe:djxDataGrid (dojox.grid.DataGrid) which uses an xe:restService, which seems to use dojox.data.JsonRestStore.
Everything works fine without a proxy, but my client accesses the application via a proxy because of corporate policy. After a user updates data in the DataGrid, it shows old values when accessed behind the proxy.
When the REST control/JsonRestStore sends an ajax GET request to fetch data, there is no Cache-Control header in the request, and Domino does not put an Expires header in the response. I believe that's why the old response to the GET request gets cached by the proxy.
We have tried to disable cache in browsers but that does not help which indicates the proxy is caching the requests.
I believe this could be solved either by:
Setting Cache-Control parameter in request headers OR
Setting Expires parameter in response headers
But I haven't found a way to set either of these. For the XPage itself Domino sets an Expires: -1 response header, but not for the ajax GET request, which is:
/mypage.xsp/?$$viewid=!ddrg6o7q1z!&$$axtarget=view:_id1:_id2:callback1:restService1
This returns the JSON data to JsonRestStore and gets cached by the proxy.
One option is to try to get an exception added to the proxy so that requests to this site would bypass the proxy cache. But exceptions are generally not easy to get through.
Any ideas? Thanks.
Update1
My colleague suggested that I could intercept the xhr GET requests made by dojox.data.JsonRestStore and add a time parameter to the URL to prevent caching. Here is my question about that:
Prevent cache in every Dojo xhr request on page
Update2
@SvenHasselbach has a great solution for preventing caching for all xhrs:
http://openntf.org/XSnippets.nsf/snippet.xsp?id=cache-prevention-for-dojo-xhr-requests
It seems to work perfectly: a &dojo.preventCache= parameter is added to the URLs, and the requests seem to return correct JSON with this parameter too. But the DataGrid stops working when I use that code. Every xhr causes this error:
Tried with Firefox and Chrome. The first page of data still loads because xhr interception is not yet in place but the subsequent pages show only "..." in each cell.
The solution is Sven Hasselbach's code in the comment section of Julian Buss's blog which needs to be slightly modified.
I changed xhrPost to xhrGet and did not place the code inside dojo.addOnLoad. When placed there, it did not take effect for the first XHR made by the DataGrid/Store.
I also removed the headers modification because it overrides the existing headers. When the REST control requests data from the server with xhrGet, the URL is always the same and the requested rows are specified in an HTTP header like this:
Range: items=0-9
This (and other) headers disappear when the original code is used. To only add headers we would have to take the existing headers from args and append to them. I didn't see a need for that, because it should be enough to add the parameter to the URL. Here is the extremely simple code I'm using:
// Keep a reference to the original dojo.xhrGet so it is only wrapped once
if( !(dojo._xhrGet )) {
    dojo._xhrGet = dojo.xhrGet;
}
// Wrap dojo.xhrGet so every GET request gets the dojo.preventCache URL parameter
dojo.xhrGet = function (args) {
    args['preventCache'] = true;
    return dojo._xhrGet(args);
};
Now I'm getting all rows, and all XHR GET URLs have the &dojo.preventCache= parameter, which is exactly what I wanted. Next we'll test in the customer's environment to see if this solves their problem.
Update
As Julian points out in his blog, I could also use a Web Site Rule to set Expires or Cache-Control HTTP response headers.
Update
The customer reports it's working now for them!

Serving content depending on http accept header - caching problems?

I'm developing an application which is supposed to serve different content for "normal" browser requests and for AJAX requests to the same URL
(in fact, it encapsulates the response HTML in a JSON object if the request is AJAX).
For this purpose I detect AJAX requests on the server side and process the response appropriately; see the pseudocode below:
function process_response(request, response)
{
if request.is_ajax
{
response.headers['Content-Type'] = 'application/json';
response.headers['Cache-Control'] = 'no-cache';
response.content = JSON( some_data... )
}
}
The problem is that when the first AJAX request to the currently viewed URL is made, strange things happen in Google Chrome: if, right after the response arrives and is processed via JavaScript, the user clicks a static link (which leads to another page) and then clicks the back button in the browser, they see the returned JSON code instead of the rendered website (from the server log I can say that no request is made). It seems that Chrome stores the latest response for the specific URL and doesn't take into account that it has a different content type, etc.
Is that a bug in Chrome, or am I misusing the HTTP protocol?
--- update 12 11 2012, 12:38 UTC
Following PatrikAkerstrand's answer, I found the following Chrome bug: http://code.google.com/p/chromium/issues/detail?id=94369
Any ideas how to avoid this behaviour?
You should also include a Vary header naming the request header you use to distinguish AJAX requests (typically X-Requested-With):
response.headers['Vary'] = 'X-Requested-With'
Vary is the standard way to control the caching context in content negotiation. Unfortunately it also has buggy implementations in some browsers; see Browser cache vary broken.
I would suggest using unique URLs.
Depending on your framework's capabilities, you can redirect (302) the browser to URL + .html to force the response format and make the cache key unique within the browser session. For AJAX requests you can still keep the suffix-less URL. Alternatively, you may suffix the AJAX URL with .json instead.
Other options are prefixing AJAX requests with /api or adding a cache-busting query parameter such as ?rand=1234.
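If the client side uses jQuery, that cache-busting parameter can be added automatically; a minimal sketch (jQuery appends a _=<timestamp> parameter to GET URLs when cache is false):
// Disable caching for all subsequent jQuery ajax GET requests on the page
$.ajaxSetup({ cache: false });
// Or per request (the URL is a placeholder):
$.ajax({ url: '/some/endpoint', cache: false, dataType: 'json' });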
Setting Cache-Control to no-store did it in my case, while no-cache didn't. This may have unwanted side effects, though.
no-store: The response may not be stored in any cache. Although other directives may be set, this alone is the only directive you need in preventing cached responses on modern browsers.
Source: Mozilla Developer Network - HTTP Cache-Control
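A minimal sketch of sending that header from the AJAX branch, using an Express-style handler (the framework and route are assumptions, not necessarily the OP's setup):
app.get('/fragment', function (req, res) {
    // no-store: the JSON variant of this URL must not be stored in any cache
    res.set('Cache-Control', 'no-store');
    res.json({ html: '<div>...</div>' });
});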

What does status=canceled for a resource mean in Chrome Developer Tools?

What would cause a page to be canceled? I have a screenshot of the Chrome Developer Tools.
This happens often but not every time. It seems like once some other resources are cached, a page refresh will load the LeftPane.aspx. And what's really odd is this only happens in Google Chrome, not Internet Explorer 8. Any ideas why Chrome would cancel a request?
We fought a similar problem where Chrome was canceling requests to load things within frames or iframes, but only intermittently and it seemed dependent on the computer and/or the speed of the internet connection.
This information is a few months out of date, but I built Chromium from scratch, dug through the source to find all the places where requests could get cancelled, and slapped breakpoints on all of them to debug. From memory, these are the only places where Chrome will cancel a request:
The DOM element that caused the request to be made got deleted (e.g. an IMG is being loaded, but before the load happened, you deleted the IMG node)
You did something that made loading the data unnecessary (e.g. you started loading an iframe, then changed the src or overwrote the contents)
There are lots of requests going to the same server, and a network problem on earlier requests showed that subsequent requests weren't going to work (DNS lookup error, an earlier identical request resulted in e.g. an HTTP 400 error code, etc.)
In our case we finally traced it down to one frame trying to append HTML to another frame, which sometimes happened before the destination frame had even loaded. Once you touch the contents of an iframe, it can no longer load the resource into it (how would it know where to put it?), so it cancels the request.
status=canceled may also happen for ajax requests triggered from JavaScript events:
<script>
$("#call_ajax").on("click", function(event){
$.ajax({
...
});
});
</script>
<button id="call_ajax">call</button>
The event successfully sends the request, but it is then canceled (though it is still processed by the server). The reason is that a button inside a form submits the form on click, no matter whether you also make an ajax request in the same click event.
To prevent the request from being cancelled, event.preventDefault(); has to be called:
<script>
$("#call_ajax").on("click", function(event){
event.preventDefault();
$.ajax({
...
});
});
</script>
NB: Make sure you don't have any wrapping form elements.
I had a similar issue where my button with onclick={} was wrapped in a form element. When the button was clicked, the form was also submitted, and that messed it all up...
Another thing to look out for could be the AdBlock extension, or extensions in general.
But "a lot" of people have AdBlock....
To rule out extensions, open a new tab in incognito, making sure that "Allow in incognito" is off for the extension(s) you want to test.
In my case, I found that it was jQuery's global timeout setting: a jQuery plugin had set the global timeout to 500 ms, so whenever a request exceeded 500 ms, Chrome would cancel it.
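A minimal illustration of the kind of setting that causes this; a global jQuery timeout aborts any request that takes longer, which then shows up as canceled:
// Any ajax request taking longer than 500 ms is aborted and appears as (canceled)
$.ajaxSetup({ timeout: 500 });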
You might want to check the X-Frame-Options response header. If it's set to SAMEORIGIN or DENY, then the iframe insertion will be canceled by Chrome (and other browsers) per the spec.
Also, note that some browsers support the ALLOW-FROM setting but Chrome does not.
To resolve this, you will need to remove the X-Frame-Options header. This could leave you open to clickjacking attacks, so you will need to decide what the risks are and how to mitigate them.
Here's what happened to me: the server was returning a malformed "Location" header for a 302 redirect.
Chrome failed to tell me this, of course. I opened the page in firefox, and immediately discovered the problem.
Nice to have multiple tools :)
Another place we've encountered the (canceled) status is in a particular TLS certificate misconfiguration. If a site such as https://www.example.com is misconfigured such that the certificate does not include the www. but is valid for https://example.com, Chrome will cancel this request and automatically redirect to the latter site. This is not the case for Firefox.
Currently valid example: https://www.pthree.org/
A cancelled request happened to me when redirecting between secure and non-secure pages on separate domains within an iframe. The redirected request showed in dev tools as a "cancelled" request.
I have a page with an iframe containing a form hosted by my payment gateway. When the form in the iframe was submitted, the payment gateway would redirect back to a URL on my server. The redirect recently stopped working and ended up as a "cancelled" request instead.
It seems that Chrome (I was using Windows 7 Chrome 30.0.1599.101) no longer allowed a redirect within the iframe to go to a non-secure page on a separate domain. To fix it, I just made sure any redirected requests in the iframe were always sent to secure URLs.
When I created a simpler test page with only an iframe, there was a warning in the console (which I had previously missed, or which maybe didn't show up):
[Blocked] The page at https://mydomain.com/Payment/EnterDetails ran insecure content from http://mydomain.com/Payment/Success
The redirect turned into a cancelled request in Chrome on PC, Mac and Android. I don't know if it is specific to my website setup (SagePay Low Profile) or if something has changed in Chrome.
Chrome Version 33.0.1750.154 m consistently cancels image loads if I am using the Mobile Emulation pointed at my localhost; specifically with User Agent spoofing on (vs. just Screen settings).
When I turn User-Agent spoofing off, image requests aren't canceled and I see the images.
I still don't understand why. In the former case, where the request is cancelled, the request headers (with the "CAUTION: Provisional headers are shown" warning) have only
Accept
Cache-Control
Pragma
Referer
User-Agent
In the latter case, all of those plus others like:
Cookie
Connection
Host
Accept-Encoding
Accept-Language
Shrug
I got this error in Chrome when I redirected via JavaScript:
<script>
window.location.href = "devhost:88/somepage";
</script>
As you can see, I forgot the 'http://'. After I added it, it worked.
Here is another case of a request being canceled by Chrome, which I just encountered and which is not covered by any of the answers above.
In a nutshell
Self-signed certificate not being trusted on my android phone.
Details
We are in development/debug phase. The url is pointing to a self-signed host. The code is like:
location.href = 'https://some.host.com/some/path'
Chrome just canceled the request silently, leaving no clue for a newbie to web development like me about how to fix the issue. Once I downloaded and installed the certificate on the Android phone, the issue was gone.
If you use axios, this can help:
// change the timeout for an axios instance; a request that exceeds it is canceled
instance.defaults.timeout = 2500;
https://github.com/axios/axios#config-order-of-precedence
In my case, I had an anchor with a click event like
<a href="" onclick="somemethod($index, hour, $event)">
Inside the click event I had a network call, and Chrome was cancelling the request. An anchor whose href is "" reloads the page, so at the same time as the click event makes the network call, the navigation cancels it. When I replaced the href with void, like
<a href="javascript:void(0)" onclick="somemethod($index, hour, $event)">
The problem went away!
If you use Observable-based HTTP requests like those built into Angular (2+), then the HTTP request can be canceled when the observable gets unsubscribed (a common thing when you use the RxJS 6 switchMap operator to combine streams). In most cases it's enough to use the mergeMap operator instead, if you want the request to complete.
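A minimal sketch of the difference using plain RxJS (the element id and the endpoint are made up):
import { fromEvent } from 'rxjs';
import { switchMap, mergeMap } from 'rxjs/operators';
import { ajax } from 'rxjs/ajax';

const clicks$ = fromEvent(document.getElementById('search'), 'click');

// switchMap: each new click unsubscribes from the previous inner request,
// which shows up as status=(canceled) in the Network tab.
clicks$.pipe(
    switchMap(() => ajax.getJSON('/api/results'))
).subscribe(console.log);

// mergeMap: every request is allowed to run to completion instead.
// clicks$.pipe(mergeMap(() => ajax.getJSON('/api/results'))).subscribe(console.log);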
I faced the same issue; somewhere deep in our code we had this pseudocode:
create an iframe
onload of iframe submit a form
After 2 seconds, remove the iframe
Thus, when the server took more than 2 seconds to respond, the iframe the server was writing the response to had already been removed; the response still had to be written, but there was no iframe to write it to, so Chrome cancelled the request. To avoid this, I made sure that the iframe is removed only after the response is complete; alternatively, you can change the target to "_blank".
Thus, one of the reasons is:
if the resource (the iframe in my case) that you are writing to is removed or deleted before you stop writing to it, the request will be cancelled
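A rough sketch of the "remove only after the response is over" fix (the form and iframe names are made up):
var iframe = document.createElement('iframe');
iframe.name = 'hidden_target';
iframe.style.display = 'none';
document.body.appendChild(iframe);

var form = document.getElementById('my_form');
form.target = 'hidden_target';
form.submit();

// Remove the iframe only once the response has finished loading,
// instead of after a fixed 2-second timeout.
iframe.addEventListener('load', function () {
    document.body.removeChild(iframe);
});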
I had embedded all font types (woff, woff2, ttf) when embedding a web font in a style sheet. Recently I noticed that Chrome cancels the requests for ttf and woff when woff2 is present. I use Chrome version 66.0.3359.181 right now, but I am not sure when Chrome started canceling the extra font types.
We had this problem with a <button> tag in a form that was supposed to send an ajax request from JS. But the request was canceled because the browser submits the form automatically on any click on a button inside the form.
So if you really want to use a button instead of a regular div or span on the page, and you want to send the form through JS, you should set up a listener with a preventDefault call.
e.g.
$('button').on('click', function(e){
e.preventDefault();
//do ajax
$.ajax({
...
});
})
I had the exact same thing with two CSS files that were stored in another folder outside my main css folder. I'm using ExpressionEngine and found that the issue was in the rules in my .htaccess file. I just added the folder to one of my conditions and it fixed it. Here's an example:
RewriteCond %{REQUEST_URI} !(images|css|js|new_folder|favicon.ico)
So it might be worth checking your .htaccess file for any potential conflicts.
The same happened to me when loading a .js file with $.ajax and making an ajax request; what I did was call it normally instead.
In my case, the code that opened the e-mail client window caused Chrome to stop loading images:
document.location.href = mailToLink;
Moving it into $(window).load(function () {...}) instead of $(function () {...}) helped.
In case this helps anybody: I came across the cancelled status when I left out the return false; in the form submit handler. This caused the ajax send to be immediately followed by the default submit action, which overwrote the current page. The code is shown below, with the important return false at the end.
$('form').submit(function() {
    $.validator.unobtrusive.parse($('form'));
    var data = $('form').serialize();
    // serialize() returns a string, so append the token instead of setting a property on it
    data += '&__RequestVerificationToken=' + encodeURIComponent($('input[name=__RequestVerificationToken]').val());
    if ($('form').valid()) {
        $.ajax({
            url: this.action,
            type: 'POST',
            data: data,
            success: submitSuccess,
            error: submitFailed // $.ajax expects 'error', not 'fail'
        });
    }
    return false; //needed to stop default form submit action
});
Hope that helps someone.
For anyone coming from LoopbackJS and attempting to use the custom stream method as provided in their chart example: I was getting this error using a PersistedModel; switching to a basic Model fixed my issue of the EventSource status cancelling out.
Again, this is specifically for the LoopBack API. Since this is a top answer and near the top on Google, I figured I'd throw this into the mix of answers.
For me, the 'canceled' status occurred because the file did not exist. It's strange that Chrome does not show a 404.
It was as simple as an incorrect path for me. I would suggest that the first step in debugging is to see whether you can load the file independently of ajax etc.
The requests might have been blocked by a tracking protection plugin.
It happened to me when loading 300 images as background images. I'm guessing that once the first one timed out, it cancelled all the rest, or the maximum number of concurrent requests was reached. I need to implement loading them 5 at a time; see the sketch below.
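A minimal sketch of a 5-at-a-time image loader (imageUrls is a placeholder for your list of URLs):
function loadInBatches(urls, limit) {
    var index = 0;
    function next() {
        if (index >= urls.length) return;
        var img = new Image();
        // Start the next download when this one finishes (or fails)
        img.onload = img.onerror = next;
        img.src = urls[index++];
    }
    // Kick off at most `limit` downloads in parallel
    for (var i = 0; i < limit && i < urls.length; i++) next();
}
loadInBatches(imageUrls, 5);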
One of the reasons could be that XMLHttpRequest.abort() was called somewhere in the code; in this case the request will have the cancelled status in the Chrome Developer Tools Network tab.
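A minimal reproduction of that case (the URL is a placeholder):
var xhr = new XMLHttpRequest();
xhr.open('GET', '/some/slow/endpoint');
xhr.send();
// Aborting the in-flight request makes it show up as (canceled) in the Network tab
setTimeout(function () { xhr.abort(); }, 100);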
In my case, it started appearing after the Chrome 76 update.
Due to an issue in my JS code, window.location was getting updated multiple times, which resulted in canceling the previous request.
Although the issue was present before, Chrome only started cancelling the requests after the update to version 76.
I had the same issue when updating a record. Inside save() I was prepping the raw data taken from the form to match the database format (doing a lot of mapping of enum values, etc.), and this intermittently cancelled the PUT request. I resolved it by taking the data prepping out of save() and creating a dedicated dataPrep() method for it. I made dataPrep async and awaited all the memory-intensive data conversion, then returned the prepped data to save() for use in the HTTP PUT client. I made sure to await dataPrep() before calling the put method:
const dataToUpdate = await dataPrep();
await http.put(apiUrl, dataToUpdate);
This solved the intermittent cancelling of request.
