Chrome doesn't cache images/JS/CSS

When Chrome loads my website, it checks the server for updated versions of the files (images/JavaScript/CSS) before it shows them. It gets a 304 from the server because I never edit the external JavaScript, CSS, or images.
What I want it to do is display the images without even checking the server.
Here are the headers:
Connection: keep-alive
Date: Tue, 03 Aug 2010 21:39:32 GMT
ETag: "2792c73-b1-48cd0909d96ed"
Expires: Thu, 02 Sep 2010 21:39:32 GMT
Server: Apache/Nginx/Varnish
How do I make it not check the server?

Make sure the "Disable cache" checkbox in Developer Tools is unchecked.

What do your request headers look like?
Chrome will set Cache-Control: max-age=0 on the request if you press Enter in the location bar. If you visit your page via a hyperlink, it should use the cache, as expected.
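For illustration, pressing Enter in the address bar typically yields a request like this (the path and host are made-up placeholders):
GET /logo.jpg HTTP/1.1
Host: example.com
Cache-Control: max-age=0
The max-age=0 request directive forces a revalidation with the server even though the cached copy is still fresh.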

Wow! I was facing the same issue for quite some time.
I'll tell you why you were facing this issue. Your headers are just fine. You receive a 304 because of the way you are refreshing the page. There are mainly three ways:
Press Enter in the address bar: Chrome reads the file from the cache first and does not go to the server at all.
Press F5: this revalidates the file to check whether it has become stale (probably that is how you are refreshing).
Press Ctrl+F5: this unconditionally reloads all static resources.
So basically, you should press the Return key in the address bar. Let me know if this works.

For me, it was a self-signed certificate:
https://code.google.com/p/chromium/issues/detail?id=110649
In the link above, the Chromium developer marked the bug as WontFix because the rule is: "Any error with the certificate means the page will not be cached."
Therefore Chrome doesn't cache resources from servers with a self-signed certificate.

If you want Chrome to cache your JS/CSS files, the server will need to set a Cache-Control header. It should look like:
Cache-Control: max-age=86400
(if you want to cache resources for a day; 86400 seconds = 24 hours).
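As a rough sketch of what setting that header involves, here is a minimal Python static-file server that adds it to every response (the port and the idea of serving the current directory are my own assumptions, not anything from the question):

from http.server import HTTPServer, SimpleHTTPRequestHandler

class CachingHandler(SimpleHTTPRequestHandler):
    def end_headers(self):
        # Tell browsers they may reuse this response for one day
        # (86400 seconds) without revalidating against the server.
        self.send_header("Cache-Control", "max-age=86400")
        super().end_headers()

if __name__ == "__main__":
    # Serve the current directory on port 8000 (arbitrary choice).
    HTTPServer(("", 8000), CachingHandler).serve_forever()

With this header present, Chrome will serve the cached copy without even issuing a conditional request until the day is up.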

I believe you are looking for
Cache-Control: immutable
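Note that immutable does not set a freshness lifetime on its own; it is normally combined with a long max-age, for example:
Cache-Control: max-age=31536000, immutable
With this, supporting browsers won't revalidate the resource at all while it is fresh, even on a normal reload.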

Related

Web page content is not gzipped on every computer

I have configured my Apache XAMPP server to serve gzipped content as described here: https://ourcodeworld.com/articles/read/503/how-to-enable-gzip-compression-in-xampp-server
This works successfully in some Firefox and Chrome browsers I have tried on different PCs (Windows and Ubuntu). I verified it by looking at the Network tab in DevTools in Firefox and Chrome, where I can see the reduced transferred size and the header Content-Encoding: gzip. I also passed the GIDZipTest: http://www.gidnetwork.com/tools/gzip-test.php
The problem is that on my PC, and on another laptop I tried (Windows 10), the content is not received as gzip by any browser, although the browsers send a request header saying they accept gzip. The weird thing is that Firefox in my Ubuntu VM receives gzipped content, but the browsers on the PC that hosts the VM do not.
I attach some pictures:
Firefox on my PC: https://i.stack.imgur.com/9KLSO.png
Chrome on my PC: https://i.stack.imgur.com/gcLsW.png
Firefox on VM: https://i.stack.imgur.com/UO9fA.png
Finally it worked after replacing http with https... I don't know why this is required, since I'm using exactly the same browser version in both cases!
P.S. CBroe was right. Looking at the request headers more carefully, I can see that in the browser that was not receiving gzipped content, the Accept-Encoding header had one more value besides gzip and deflate: br, a.k.a. Brotli, which seems to be supported only over HTTPS. Could that be the explanation? Although I didn't configure anything for Brotli in XAMPP...
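A browser-independent way to see which encoding a server actually returns is a small Python check like the sketch below (the localhost URL is a placeholder for your own page):

import urllib.request

# Advertise gzip only, the way a browser's Accept-Encoding header would.
req = urllib.request.Request(
    "http://localhost/",  # placeholder: use your own page's URL
    headers={"Accept-Encoding": "gzip"},
)
with urllib.request.urlopen(req) as resp:
    # urllib does not decompress automatically, so this shows exactly
    # what the server sent; expect "gzip" if compression is working.
    print("Content-Encoding:", resp.headers.get("Content-Encoding"))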

IE8(+win7) can't download a file which contains no-cache in HTTP header

I found that a specific client (Windows 7 + IE8) can't download a file (a PDF) whose HTTP response contains Cache-Control: no-cache:
http://www.doosan.com/doosaniv/download.do?path=product&sav=225806754671.pdf&ori=d70s-5_plus.pdf&dir=20110630
But if the header contains Cache-Control: no-cache="set-cookie", there's no problem downloading:
http://www.doosan.com/doosaniv/download.do?path=product&sav=225515770296.pdf&ori=d18s-5.pdf&dir=20110630
And in the first situation, if I run IE8 as Administrator, there's no problem downloading.
(Note that I log on as Administrator in Windows 7. It's weird.)
I found a blog post that talks about SSL and no-cache. I think it's a similar but different problem.
Thank you.
Thank you for posting this question. The links and examples were very helpful in solving other problems.
From the MSDN article you link to:
"if a user tries to download a file over a HTTPS connection, any response headers that prevent caching will cause the file download process to fail."
I'm guessing that IE8 doesn't respect Cache-Control: no-cache="set-cookie" as a proper no-cache header, and thus believes there is nothing preventing caching, so the download is allowed to continue.

Chrome Caches for Too Long

My website, www.johnshammas.com, works perfectly in all browsers. Except... anyone who has viewed the previous version in Chrome is stuck with that version until they empty their cache. What would cause the website to return a "Not Modified" response when in reality it has been modified heavily?
If a 304 Not Modified response was returned, it is because the server earlier sent a response with an ETag or a Last-Modified header.
Later, the browser sent that value back in an If-None-Match or If-Modified-Since request header. The server recognized from the ETag or the date that the resource had not changed since the browser last requested it.
So it returned a 304.
If you are not familiar with these or other cache headers, I recommend doing some research on them. There are many great tutorials on what these are and how to use them.
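For concreteness, a typical exchange looks like this (the file name and ETag value are made up):
First response:
HTTP/1.1 200 OK
ETag: "abc123"
Later conditional request:
GET /app.js HTTP/1.1
If-None-Match: "abc123"
Reply when the resource is unchanged:
HTTP/1.1 304 Not Modified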
A few possible solutions go like this...
No 1 (Permanent)
F12 for Dev Tools > Gear symbol for settings in the lower right > Network > Check "Disable cache"
No 2 (Semi-Permanent)
Switch to Incognito mode via Ctrl + Shift + N. But watch out, as this also ends your session.
No 3 (One-Time)
Ctrl + Shift + Del > Confirm
No 4 (One-Time)
F12 for Dev Tools > Network tab > right-click the content area > Clear browser cache > Confirm.
The problem is that Chrome needs to see must-revalidate in the Cache-Control header in order to re-check files and determine whether they need to be re-fetched.
I recommend the following response header:
Cache-Control: must-revalidate
This tells Chrome to check with the server and see if there is a newer file. If there is a newer file, it will receive it in the response. If not, it will receive a 304 response and the assurance that the copy in its cache is up to date.
If you do NOT set this header, then in the absence of any other setting that invalidates the file, Chrome will never check with the server to see if there is a newer version.
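In practice, must-revalidate is usually combined with a max-age so the browser knows when the cached copy becomes stale and revalidation kicks in, e.g.:
Cache-Control: max-age=0, must-revalidate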
Here is a blog post that discusses the issue further.

How can I validate http response headers?

It's the first time I am doing something with headers. I am mainly concerned with Cache-Control but there may be others I will need to check as well. For example, I try to send the following header to the browser (based on tutorials I just read):
Cache-Control:private, max-age=2011-12-30 11:40:56
Google Chrome displays it this way under Network -> Headers -> Response Headers, but how do I know if it's correct, that there aren't any typos, syntax errors, and such? Will it really work? Will the browser behave the way I want, or will it treat the header as gibberish (something like "unknown header/value")? I've tried sending nonsensical headers on purpose, and they were displayed along with the rest. Is there a Chrome tool/addon for this, or any other way? Thank you in advance!
I'm afraid you won't be able to check whether the resource has been cached by proxies en route, but you can check whether your browser has cached it.
While in the Network panel of Chrome DevTools, hit F5 to reload your page. You should see something like "304 Not Modified" in the Status column for the resource in question, which means the resource has not been modified and its contents were not received from the server but loaded from the browser's cache.
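Chrome itself has no header validator, but as a rough sketch you can fetch the page with Python and sanity-check the Cache-Control value yourself. The URL below is a placeholder, and the check covers only max-age, which in your example is invalid anyway: max-age takes a number of seconds, not a date.

import urllib.request

with urllib.request.urlopen("http://localhost/") as resp:  # placeholder URL
    cc = resp.headers.get("Cache-Control", "")
    print("Cache-Control:", cc)
    for directive in cc.split(","):
        name, _, value = directive.strip().partition("=")
        # max-age must be an integer number of seconds, e.g. max-age=3600;
        # a date like "2011-12-30 11:40:56" is not valid here.
        if name == "max-age" and not value.isdigit():
            print("Invalid max-age value:", value)

Browsers silently ignore directives they can't parse, which is why nonsensical headers still show up in DevTools without any warning.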

Firefox "intelligently" and silently fixes incorrect file references in CSS and Scripts at runtime. Driving me nuts!

Well this is a really weird issue, I really didn't find anything on this elsewhere so I thought I'd address it here.
Say I have an "image.jpg" and accidentally reference it in the CSS like so:
url(imag.jpg)
Note the missing "e". Now for me, Firefox is so incredibly clever that it will still find the correct image, but it won't spit out a warning. So I assume that everything is OK.
But later, when I test the page in any other browser, all of a sudden the image doesn't display (and rightly so). That's because Firefox thought it was a good idea to correct my error without telling me.
This becomes more critical with scripts. Firefox will also auto-correct a typo in a <script src=""> reference.
I just wasted a whole hour scratching my head and trying to debug an ajax function in Webkit - turns out, I just had a typo where I included the file.
Why on earth does Firefox do this without telling me, and where the heck can I turn it off? This first occurred somewhere around FF 3.0 and still persists in 3.6.3.
/rant an thank fo any inpu ;)
EDIT: Thanks for your answers so far. I've uploaded a demo
EDIT 2: Thanks to the great input below, I found out that it was my server running Apache's spelling-correction module. Solution: add
CheckSpelling OFF
to the .htaccess, and that fixes it. Thanks again to all.
PS. I'm sorry that I blamed you, Firefox. You're still the best!
I don't think this has anything to do with Firefox. The script also gets included in IE, which leads me to believe your web server is redirecting the request to the real file, not Firefox fixing it. What web server are you using? IIS?
When I browse to http://soapdesigned.com/firefox-test/scrip.js in IE, I get prompted to download script.js, the correct file.
Update:
After examining with Fiddler, when I request scrip.js, I get HTTP 301 (Moved Permanently).
I think what you're seeing is mod_speling (or something like it) in action:
http://httpd.apache.org/docs/2.0/mod/mod_speling.html
It's an Apache module meant for correcting minor misspellings.
Requests to documents sometimes cannot be served by the core apache server because the request was misspelled or miscapitalized. This module addresses this problem by trying to find a matching document, even after all other modules gave up. It does its work by comparing each document name in the requested directory against the requested document name without regard to case, and allowing up to one misspelling (character insertion / omission / transposition or wrong character). A list is built with all document names which were matched using this strategy.
This is not Firefox, it is something in your server:
~% curl -v -o/dev/null http://soapdesigned.com/firefox-test/scrip.js
* About to connect() to soapdesigned.com port 80 (#0)
* Trying 82.165.116.124... connected
* Connected to soapdesigned.com (82.165.116.124) port 80 (#0)
> GET /firefox-test/scrip.js HTTP/1.1
> User-Agent: curl/7.19.5 (x86_64-pc-linux-gnu) libcurl/7.19.5 OpenSSL/0.9.8g zlib/1.2.3.3 libidn/1.15
> Host: soapdesigned.com
> Accept: */*
>
< HTTP/1.1 301 Moved Permanently
< Date: Sat, 17 Apr 2010 22:44:08 GMT
< Server: Apache
< Location: http://soapdesigned.com/firefox-test/script.js
< Transfer-Encoding: chunked
< Content-Type: text/html; charset=iso-8859-1
<
* Connection #0 to host soapdesigned.com left intact
* Closing connection #0
Your server is probably running mod_speling, which detects failed requests for nonexistent files and tries to redirect them to files with similar spellings.
