I'd like to speed up the initial loading of a site. It requests several API endpoints during the initial render, and I want to add <link rel="preload" /> for a few of these requests so they start loading earlier. However, these API responses are not cacheable by the browser. So the question is: how does the browser behave in this case? Will it fetch the content again regardless of the preload because of the no-cache headers, or is it smart enough to realize that I need exactly that preloaded content?
So it turns out the browser respects the no-cache headers as expected: such responses cannot be preloaded using <link rel="preload">. The solution is to add a few-second TTL to the response.
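For illustration, a minimal sketch of the workaround; /api/config is a hypothetical endpoint, and the exact TTL is up to you:
<!-- The server must send a short TTL instead of no-cache, e.g.
     Cache-Control: private, max-age=5
     so the preloaded response is still fresh when the app asks for it. -->
<link rel="preload" href="/api/config" as="fetch" crossorigin="anonymous">
<script>
  // The later request must match the preload (same URL, CORS mode,
  // anonymous credentials), or the browser fetches the content again.
  fetch('/api/config')
    .then(function (res) { return res.json(); })
    .then(function (data) { console.log(data); });
</script>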
Related
My site loads an external JavaScript file with a protocol-relative URL, i.e.
<script type="text/javascript" src="//somewhere.com/script.js"></script>
(Note: the script tag is injected asynchronously to fetch the script after page load.)
but my dns-prefetch tag uses an absolute protocol, i.e.
<link rel="dns-prefetch" href="http://somewhere.com/script.js">
so when the site is loaded over HTTPS the prefetch is http and the script is https. There is no warning in the Chrome console about this.
Besides keeping these consistent, is there any performance benefit to changing the dns-prefetch link to a protocol-relative URL?
One thought I had was that because all dns prefetch supposedly does is resolve an IP from a hostname, it might actually be beneficial to use http in the prefetch to avoid needing to do the SSL handshake. But this assumes the dns-prefetch link instructs the browser to make a network request, which I'm not sure is what's happening.
The following three lines, when supported by the browser, do the same thing:
<link rel="dns-prefetch" href="http://SERVERNAME/some.script.js">
<link rel="dns-prefetch" href="https://SERVERNAME/some.script.js">
<link rel="dns-prefetch" href="//SERVERNAME/some.script.js">
They all try to request A and AAAA resource records from the DNS resolver, if that information is not already present in the browser's name-service cache.
Therefore, the performance is the same.
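Since only the hostname is used, the path can be dropped. As for the TLS-handshake concern in the question: dns-prefetch never opens a connection at all, so the scheme makes no difference; rel="preconnect" is the hint that goes further. A small sketch:
<!-- Only the hostname matters for dns-prefetch; path and scheme are ignored. -->
<link rel="dns-prefetch" href="//somewhere.com">
<!-- Where supported, preconnect additionally opens the TCP connection
     and performs the TLS handshake ahead of time. -->
<link rel="preconnect" href="https://somewhere.com">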
I am trying to use a texture from my own hosted webserver, but when putting it into the asset tag I get the following error.
> Access to Image at 'http://192.168.137.1:3000/cat2.jpg' from origin
> 'http://localhost' has been blocked by CORS policy: No
> 'Access-Control-Allow-Origin' header is present on the requested
> resource. Origin 'http://localhost' is therefore not allowed access.
The picture itself is accessible, since I can see it in the web inspector, and it works perfectly in a simple image tag. Does anyone know what to do here?
Thanks!
Update: you can find my code below:
<script src="https://aframe.io/releases/0.5.0/aframe.min.js"></script>
<a-scene>
<a-assets>
<img id="cat" src="http://192.168.x.x:3000/cat.jpg"/>
</a-assets>
<a-sky src="#cat"/> <!-- this code works not (CORS) -->
<a-sky src="http://192.168.x.x:3000/cat.jpg" /> <!-- this code works not (CORS) -->
</a-scene>
<img id="cat" src="http://192.168.x.x:3000/cat.jpg"/> <!-- this code works -->
Solution:
I figured out the main problem: it had nothing to do with A-Frame itself, it was a minor mistake on the server. The CORS headers were registered after the file server was initialized; moving them before the file-server initialization did the trick... of course... :-D
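For reference, a minimal sketch of that fix, assuming a Node/Express static server (the actual server stack wasn't specified): the header middleware must be registered before the static handler, or the files go out without it.
const express = require('express');
const app = express();

// Register the CORS header *before* the static file server,
// so every static response (including cat2.jpg) carries it.
app.use(function (req, res, next) {
  res.setHeader('Access-Control-Allow-Origin', '*');
  next();
});

app.use(express.static('public')); // the static file server
app.listen(3000);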
What's CORS?
The issue is not A-Frame, Three.js, or WebVR. A CORS (cross-origin resource sharing) check happens when JavaScript (in your situation, the script https://aframe.io/releases/0.5.0/aframe.min.js ) makes a cross-domain XHR (XMLHttpRequest) call (in your situation, to http://192.168.x.x:3000/cat.jpg ).
Wikipedia has a diagram that explains the CORS workflow in more detail.
Your request is a cross-origin GET request, and the server didn't add any Access-Control-* headers to the response, so the result is an error.
More information about CORS can be found on the Mozilla Developer Network.
Documentation from A-frame
Why does my asset (e.g., image, video, model) not load?
First, if you are doing local development, make sure you are using a local server so that asset requests work properly.
If you are loading the asset from a different domain (as you are), make sure that the asset is served with cross-origin resource sharing (CORS) headers. You could either find a host that serves the asset with CORS headers, or place the asset on the same domain (directory) as your application.
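In practice that means two things: the server must send the CORS header (see the server sketch above), and the asset tag should request a CORS fetch via the crossorigin attribute. A sketch using the asker's URL:
<a-assets>
  <!-- crossorigin="anonymous" makes the browser send a CORS request,
       so the Access-Control-Allow-Origin response header is honored. -->
  <img id="cat" crossorigin="anonymous" src="http://192.168.x.x:3000/cat.jpg">
</a-assets>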
Why does this happen?1
It looks like the A-Frame script must read the image's pixel data to use it as a WebGL texture, and the browser treats that as a cross-origin read. That's why <a-sky src="http://192.168.0.253:457/cat.jpg" /> is not working: the image comes from a different origin than the page and is not served with CORS headers.
If you use <a-assets><img src="http://192.168.0.253:457/cat.jpg" /></a-assets>, the image URL is bound to the a-sky's src attribute, but the pixel data still has to be read for the texture, so the same cross-origin restriction applies.
1 I'm not 100% sure, but there is a big chance that this is correct. If anyone thinks it is not correct, please say so. If it is correct, please say so too.
Solutions
Place the file on your local host web server.
Add the response header Access-Control-Allow-Origin when the image is requested. The value should be the page's origin (http://localhost in your case), or * to allow any origin.
After much trial and error, I finally found a way to incorporate images from a remote server into my local page without facing CORS errors. The solution is to use a CORS proxy instead of making the request directly.
Although the following code is not the most elegant solution, it works for me.
<!DOCTYPE html>
<html>
  <head>
    <title></title>
    <script src="https://aframe.io/releases/0.9.2/aframe.min.js"></script>
  </head>
  <body>
    <a-scene>
      <a-assets>
        <img id="frodo" crossorigin="anonymous" src="https://cors-anywhere.herokuapp.com/http://i.dailymail.co.uk/i/pix/2011/01/07/article-1345149-0CAE5C22000005DC-607_468x502.jpg">
      </a-assets>
      <!-- Using the asset management system. -->
      <a-image src="#frodo"></a-image>
    </a-scene>
  </body>
</html>
The CORS proxy adds all the headers needed to perform the request to the remote server and fetch the object referenced in the src field.
Please note that the src request is: https://cors-anywhere.herokuapp.com/<url_you_are_looking_for>
Clicking on an element in a page, my application changes the src attribute of an img element.
I don't understand why Firefox requests the images with these HTTP headers:
Pragma: no-cache
Cache-Control: no-cache
preventing Firefox from using its own cache.
Chrome, for example, doesn't.
Thank you
Luca
Before answering I should add some details:
1. My application is behind Tomcat
2. I saw that these headers were added only for files bigger than 512KB
The solution was to modify server.xml to allow a bigger cache and bigger cached objects (both sizes are in kilobytes):
<Context cacheMaxSize="40960" cacheObjectMaxSize="2148" docBase="\\mypath\docbase" path="/graphics" />
see:
http://tomcat.apache.org/tomcat-6.0-doc/config/context.html#Standard_Implementation
I'm hosting a static website in S3 and using Cloudfront to cache files. I've essentially got 3 files with the following headers:
index.html (Cache-Control: no-cache)
app.js (Cache-Control: max-age=63072000, public)
style.css (Cache-Control: max-age=63072000, public)
My HTML file uses query string parameters that get updated every time I update my CSS or JS files. I've configured CloudFront to forward these parameters, and I've verified that this works to invalidate cached resources. My index.html file looks something like this:
<html>
<head>
...
<link rel="stylesheet" href="app.css?v=14113e2c764">
</head>
<body>
...
<script src="app.js?v=14113e2c764"></script>
</body>
</html>
It seems to work great as I push updates all day, but when I come in the next morning and push my next change, the index.html file is out of date. Instead of having the correct ?v= parameter, it has the old one! The only way to fix it is to invalidate the HTML file manually. Then everything works for the rest of the day. The next day I have the same problem again.
What's going on here?
Verify that the CloudFront distribution's Minimum TTL is set to 0. If it's set to any other value, CloudFront won't respect the no-cache header and will still cache the file for the Minimum TTL. More details about the caching directives can be found here:
http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/Expiration.html
If this doesn't help, try to debug the actual HTTP request for index.html and post the response headers here so we can have a look at them.
Also, instead of using no-cache for the index.html file, you can try using
public, must-revalidate, proxy-revalidate, max-age=0
This will allow CloudFront to store the file on the edge location, but it will force it to revalidate it with the origin with each request. If the file hasn't changed, CloudFront will not need to transfer the file's entire content from the origin. This can speed up the response time, especially for larger files.
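For example, a sketch of setting that header when uploading with the AWS CLI; the bucket name and local path are placeholders:
# Upload index.html with a header that forces revalidation on every request.
aws s3 cp dist/index.html s3://YOUR_BUCKET/index.html \
  --cache-control 'public, must-revalidate, proxy-revalidate, max-age=0'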
This is more of a comment, but a bit too long. Hopefully helps others that land here.
Cache busting via query parameter has drawbacks, although perhaps you can combat them all via Cloudfront behaviors. See https://stackoverflow.com/a/24166106/630614. Still, I would recommend unique filenames e.g. app.css?v=14113e2c764 becomes app.14113e2c764.css.
To respond to BradLaney's comment/issue: If you've updated cache-control headers and don't see the changes, it's because the origin item is already cached – invalidate it and you should see the new headers the next time you view the resource.
Regarding race condition when setting cache-control for S3 items, or just setting cache-control in general for an SPA, this is what's working well for my team:
# Sync all files with 1 week cache-control, excluding .html files.
aws s3 sync --cache-control 'max-age=604800' --exclude "*.html" dist/ s3://$AWS_BUCKET/
# Sync remaining .html files with no cache.
aws s3 sync --cache-control 'no-cache' dist/ s3://$AWS_BUCKET/
Every time I search on this I get information about how to disable the browser cache.
Never anything about enabling it.
How do I get the back button to use the cache and not regenerate the page?
As far as I know you can force a browser to reload the data by means of these meta tags:
<meta http-equiv="Pragma" content="no-cache">
<meta http-equiv="Cache-control" content="no-cache">
<meta http-equiv="Expires" content="0">
but you cannot force it to read from the cache. The browser will do that by itself if you don't explicitly tell it to ignore the cache, and the page data is in fact cached and not expired.
This does not depend on CodeIgniter because it's client-side behavior, but you might want to use the meta() function included in CI's html helper, which will simply output the corresponding meta tag, e.g.:
echo meta('Cache-control', 'no-cache', 'http-equiv');
would generate the second code line above.
Note:
The 1st meta tag is specified for HTTP/1.0 while the 2nd one is for HTTP/1.1, but both are used to allow backwards compatibility.
If you're using xhtml instead of html remember to close the meta tags with />
Browser caching has nothing to do with CodeIgniter. You can use HTML meta tags to instruct the browser specifically not to cache pages, or you can set a cache expiry for an individual page like so:
<meta http-equiv="expires" content="Mon, 10 Dec 2001 00:00:00 GMT" />
You could use a bit of PHP to drop tomorrow's date in there. The browser (depending on settings) will usually pull as much as it can from the cache automatically, including when clicking the back button; the cache for the back button works the same as if you were coming in from any other link.
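For instance, a small sketch of that PHP bit, emitting an expiry one day ahead:
<!-- gmdate() formats the timestamp in the HTTP date style used above. -->
<meta http-equiv="expires" content="<?php echo gmdate('D, d M Y H:i:s', time() + 86400); ?> GMT" />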
You could set expires headers through your .htaccess using something like the following on an Apache server (you would have to ask about how to do this on other server types) to tell the browser that it should cache certain types of content for a given period of time:
ExpiresActive On
ExpiresByType text/html "access plus 60 seconds"
This will tell the browser to store anything of MIME type text/html for 60 seconds (this includes CodeIgniter output). BUT DON'T DO THIS if you're dealing with dynamic content: it will stop any dynamic page content being loaded and will stop any changes to your content being loaded for returning visitors (obviously this second part is not such an issue with a 60-second cache).
The key thing to realise is that your page is not one thing; it's made up of lots of parts. Some of these parts should be served from cache (JS, CSS, images, etc.) and some should not (often the HTML falls into this category).
The browser will automatically call all the parts of your page from the cache where the cache has not expired.
Usually you would use .htaccess (or a similar method) to cache your CSS, images, etc. (using versioning in filenames to force a reload when they change), as sketched below.
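A hedged .htaccess sketch for those long-lived static assets, again assuming Apache with mod_expires enabled:
<IfModule mod_expires.c>
  ExpiresActive On
  # Versioned filenames make it safe to cache these for a long time.
  ExpiresByType text/css "access plus 1 year"
  ExpiresByType application/javascript "access plus 1 year"
  ExpiresByType image/png "access plus 1 year"
</IfModule>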
You should also take advantage of server-side caching. CodeIgniter does this for whole pages, but I don't find that very helpful for any kind of dynamic site, so if you are interested in server-side caching I would take a look at Phil Sturgeon's partial caching library for CI:
https://github.com/philsturgeon/codeigniter-cache
This won't stop a request being sent to the server, but it will mean the request requires less processing and can be served as one or several pieces of static content.